diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeavm" "b/data_all_eng_slimpj/shuffled/split2/finalzzeavm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeavm" @@ -0,0 +1,5 @@ +{"text":"\\section*{Abstract}\nDetermining how synaptic coupling within and between regions is modulated during sensory processing is an important topic in neuroscience. Electrophysiological recordings provide detailed information about neural spiking but have traditionally been confined to a particular region or layer of cortex. \nHere we develop new theoretical methods to study interactions between and within two brain regions, based on experimental measurements of spiking activity simultaneously recorded from the two regions. By \nsystematically comparing experimentally-obtained spiking statistics to (efficiently computed) model spike rate statistics, we identify regions in model parameter space that are consistent with the experimental \ndata. We apply our new technique to dual micro-electrode array \\textit{in vivo} recordings from two distinct regions: olfactory bulb ({\\bf OB}) and anterior piriform cortex ({\\bf PC}). \nOur analysis predicts that: i) inhibition within the afferent region (OB) has to be weaker than the inhibition within PC, ii) excitation from PC to OB is generally stronger than excitation from OB to PC, \niii) excitation from PC to OB and inhibition \nwithin PC have to both be relatively strong compared to presynaptic inputs from OB. \nThese predictions are validated in a spiking \nneural network model of the OB--PC pathway that satisfies the many constraints from our experimental data. \nWe find that when the derived relationships are violated, the spiking statistics no longer satisfy the constraints from the data. In principle, this modeling framework can be adapted to other systems and be used to investigate \nrelationships between other neural attributes besides network connection strengths. 
\nThus, this work can serve as a guide to further investigations into the relationships of various neural attributes within and across different regions during sensory processing.\n\n\\section*{Author Summary}\n\n\nSensory processing is known to span multiple regions of the nervous system. \nHowever, electrophysiological recordings during sensory processing have traditionally been limited to a single region or brain layer. \nWith recent advances in experimental techniques, recording spiking activity from multiple regions simultaneously is now feasible. However, other important quantities --- such as inter-region connection strengths --- \ncannot yet be measured. \nHere, we develop new theoretical tools to leverage data obtained by recording from two different brain regions simultaneously. We address the following questions: \nwhat are the crucial neural network attributes that enable sensory processing across different regions, and how are these attributes related to one another? \n\nWith a novel theoretical framework to efficiently calculate spiking statistics, we can characterize the high-dimensional parameter space that satisfies \ndata constraints. We apply our results to the olfactory system to make specific predictions about effective network connectivity. \nOur framework relies on incorporating relatively easy-to-measure quantities to predict hard-to-measure interactions across multiple brain regions. \nBecause this work is adaptable to other systems, we anticipate it will be a valuable tool for the analysis of other larger-scale brain recordings. \n\n\n\\section*{Introduction}\n\nAs experimental tools advance, measuring whole-brain dynamics with single-neuron resolution becomes closer to reality~\\cite{prevedel14,ahrens13,lemon15,brainInit_13}. \nHowever, a task that remains technically elusive is to measure the interactions within and across brain regions that govern such system-wide dynamics. 
\nHere we develop a theoretical approach to elucidate such interactions based on easily recorded properties such as the mean and (co-)variance of firing rates, when they can be measured in multiple regions and in multiple activity states. \nAlthough previous theoretical studies have addressed how spiking statistics depend on various mechanisms~\\cite{brunel,brunelhakim,doiron16,BarreiroLy_RecrCorr_17}, these studies have typically been limited to a single region, \nleaving open the challenge of how inter-regional interactions impact the system dynamics, and ultimately the coding of sensory signals~\\cite{zohary94,bair01,ecker11,moreno14,kohn16}.\n\nAs a test case for our new theoretical tools, we studied interactions in the olfactory system. We used two micro-electrode arrays to simultaneously record from olfactory bulb ({\\bf OB}) and anterior piriform cortex ({\\bf PC}). \nConstrained by these experimental data, we developed computational models and theory to investigate interactions within and between OB and PC. \nThe modeling framework includes two distinct regions: a \nnetwork that receives direct sensory stimuli (here, the OB), and a second neural network (PC) that is reciprocally coupled to the afferent region. Each region contains multiple individual populations, each of which is modeled with a firing rate model~\\cite{wilsoncowan1}; thus even this minimal model involves several coupled stochastic differential equations (here, six) and has a high-dimensional parameter space.\nAnalysis of this system would be unwieldy in general; we address this by developing a novel method to compute firing statistics that is computationally efficient, captures the results of Monte Carlo simulations, and can provide analytic insight. 
\n\nThorough analysis of experimental data in both the spontaneous and stimulus-evoked states leads to a number of constraints on first- and second-order spiking statistics--- many of which could not be observed using data \nfrom just one micro-electrode array. In particular, we find twelve (12) constraints that are consistent across different odorant stimuli. We use our theory and modeling to study an important subset of neural attributes (synaptic strengths) \nand investigate what relationships, if any, must be satisfied in order to robustly capture the many constraints observed in the data. We find that: i) inhibition within OB has to be weaker than the inhibition in PC, \nii) excitation from PC to OB is generally stronger than excitation from OB to PC, iii) excitation from PC to OB and inhibition within PC have to both be relatively strong compared to inputs originating in OB (inhibition within OB and excitation from OB to PC). \nWe validate these guiding principles in a large spiking neural network (leaky integrate-and-fire, or {\\bf LIF}) model,\nby showing that the many constraints from the experimental data are {\\it all} satisfied. \nFinally, we demonstrate that violating these relationships in the LIF model results in spiking statistics that do not satisfy all of the data constraints.\n\n\nOur predictions provide insights into interactions in the olfactory system that are difficult to directly measure experimentally. Importantly, these predictions were inferred from spike rates and variability, which are relatively easy to measure. 
\nWe believe that the general approach we have developed -- using easy-to-measure quantities to predict hard-to-measure interactions -- will be valuable in diverse future investigations of how whole-brain function \nemerges from interactions among its constituent components.\n\n\n\\section*{Results}\n\nOur main result is the development of a theoretical framework to infer hard-to-measure connection strengths in a minimal firing rate model, constrained by spike count statistics from \nsimultaneous array recordings. \n\nWe performed simultaneous dual micro-electrode recordings in the olfactory bulb ({\\bf OB}) and the anterior piriform cortex ({\\bf PC}) (see {\\bf Materials and Methods}). First, we use the experimental data to \ncompute population-averaged (across cells or cell pairs) first- and second-order spike count statistics, \ncomparing across regions (OB or PC) and activity states (spontaneous or stimulus-evoked). \nWe use these statistics to \nconstrain a minimal firing rate model of the coupled OB--PC system, aided by a quick and efficient method for calculating firing statistics without Monte Carlo simulations. \n\nAs a test case for our methods, we investigate the structure of four important parameters: within-region inhibitory connection strengths and between-region excitatory connection strengths. We find several relationships that must hold in\norder to satisfy all constraints from the experimental data. These results are then validated with a large \nspiking network of leaky integrate-and-fire ({\\bf LIF}) model neurons. \n\n\\subsection*{Consistent Trends in the Experimental Data}\n\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[width=5.25in]{Fig1-eps-converted-to.pdf}\n\\caption{{\\bf Population firing rates in anterior piriform cortex (PC) and olfactory bulb (OB) from simultaneous dual array recordings.}\n(A) Trial-averaged population firing rate in time from 73 PC cells (38 and 35 cells from two recordings). 
The inset shows a closeup view, to highlight the distinction between spontaneous and evoked states. \n(B) Trial-averaged population firing rate in time from 41 OB cells (23 and 18 cells from two recordings). Inset as in (A); both (A) and (B) use 5\\,ms time bins. \n(C) The PC firing rate (averaged in time and over trials) of individual cells in the spontaneous (black) and \nevoked states (red). The arrows indicate the mean across 73 cells; the mean$\\pm$std. dev. in the spontaneous state is: $0.75\\pm 0.93$\\,Hz, in the evoked state is: $1.5\\pm1.6$\\,Hz. \n(D) Similar to (C), but for the OB cells described in (B). The mean$\\pm$std. dev. in the spontaneous state is: $2\\pm3.3$\\,Hz, in the evoked state is: $4.7\\pm7.1$\\,Hz.}\n\\label{fig1}\n\\end{figure}\n\nWe first present our data from simultaneous dual micro-electrode array recordings in anesthetized rats. \nDuring each 30-second trial an odor was presented for roughly one second; recordings continued for a total of 30 seconds. \nThis sequence was repeated for 10 trials with 2-3 minutes in between trials; the protocol was repeated for another odor. \nRecordings were processed to extract single-unit activity; the number of units identified was: 23 in OB and 38 in PC (first recording, two odors), 18 in OB and 35 in PC (second recording, another two odors). In total, there were four\ndifferent odors presented.\n\nIn this paper, we focus on the spike count statistics rather than the detailed temporal structure of the neural activity (Fig~\\ref{fig1}A--B). We divided each 30\\,s trial into two segments, representing the odor-{\\bf evoked} state (first 2 seconds) and the {\\bf spontaneous} state (remaining 28 seconds). We computed first- and second-order statistics for identified units; i.e., firing rate \n$\\nu_k$, spike count variance, and spike count covariance (we also computed two derived statistics, Fano Factor and Pearson's correlation coefficient, for each cell or cell pair). 
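The windowed spike count statistics just listed can be made concrete with a short script. The sketch below is illustrative only -- it is not the analysis code used for these recordings -- and the two spike trains, the 1 s window, and the helper names are hypothetical:

```python
import numpy as np

def window_counts(spike_times, t_start, t_stop, t_win):
    """Bin spike times (in seconds) into non-overlapping counting windows."""
    edges = np.arange(t_start, t_stop + 1e-12, t_win)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts

def fano_factor(counts):
    """Spike count variance divided by spike count mean."""
    m = counts.mean()
    return counts.var(ddof=1) / m if m > 0 else float("nan")

# Hypothetical spike trains standing in for two sorted units (28 s of
# "spontaneous" activity); the real analysis uses units from the arrays.
rng = np.random.default_rng(0)
train_a = np.sort(rng.uniform(0.0, 28.0, size=60))    # ~2.1 Hz
train_b = np.sort(rng.uniform(0.0, 28.0, size=130))   # ~4.6 Hz

t_win = 1.0  # counting window; the paper sweeps windows from 5 ms to 2 s
counts_a = window_counts(train_a, 0.0, 28.0, t_win)
counts_b = window_counts(train_b, 0.0, 28.0, t_win)

rate_a = counts_a.mean() / t_win                # firing rate nu (Hz)
var_a = counts_a.var(ddof=1)                    # spike count variance
ff_a = fano_factor(counts_a)                    # Fano factor
cov_ab = np.cov(counts_a, counts_b)[0, 1]       # spike count covariance
rho_ab = np.corrcoef(counts_a, counts_b)[0, 1]  # Pearson correlation
```

Sweeping `t_win` over the 5 ms to 2 s range used in the figures gives the window-dependence of each statistic from the same binned counts.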
Spike count variances, covariances and correlations were computed \nusing time windows $T_{win}$ ranging between 5\\,ms and 2\\,s. In computing population statistics we distinguished between different odors (four total), different regions (OB vs. PC), and different activity states (spontaneous vs. evoked); \notherwise, we assumed statistics were stationary over time.\n\nWe then sought to identify relationships among these standard measures of spiking activity. For example, we found that mean firing rate of OB cells in the evoked state was higher than the mean firing rate in the spontaneous state, or $\\nu_{OB}^{Ev} > \\nu_{OB}^{Sp}$ (although there is significant variability across the population, we focus on population-averaged statistics here).\nWe found twelve (12) robust relationships \nthat held across all odors. \nTable~\\ref{table:constr} summarizes the consistent \nrelationships we found in our data, and Fig~\\ref{fig1}C--D, Fig~\\ref{fig2}, Fig~\\ref{fig3} show the data exhibiting these relationships when combining all odorant stimuli (see \\nameref{S1_file} for statistics plotted by distinct odors). \nThroughout the paper, when comparing activity states the spontaneous state is in black and the evoked state in red; when comparing regions the OB cells are in blue and PC cells in green. \n\n\n\\begin{figure}[!h]\n\\centering\n\t\\includegraphics[width=5.25in]{Fig2-eps-converted-to.pdf}\n\\caption{{\\bf A subset of the important relationships between the spiking statistics in spontaneous and evoked states.}\nConsistent trends that hold for {\\it all} 4 odorant stimuli in the experimental data. Each panel shows two spike count statistics, as a function of the time window. 
\nThe shaded error bars show the \\textit{standard error of the mean} above and below the mean statistic.\n(A) Stimulus-induced decorrelation of PC cell pairs (red) compared to the spontaneous state (black).\n(B) The variability in PC (measured by Fano Factor) is lower in the evoked state (red) than in the spontaneous state (black). \n(C) In the spontaneous state, the average correlation of PC pairs (green) is \\textit{higher} than that of OB pairs (blue).\n(D) In the evoked state, the average correlation of PC pairs (green) is \\textit{lower} than that of OB pairs (blue), for long time windows. There were 406 total OB pairs and 1298 total PC pairs. (Although the trends reverse in (A) and (D) for smaller time windows, our focus is on the larger time windows because stimuli were held for 1\\,s; smaller time windows are shown for completeness.)}\n\\label{fig2}\n\\end{figure}\n\n\n\\begin{table}[!ht]\n\\centering\n\\caption{ {\\bf The 12 relationships (constraints) that hold in the experimental data across all odors.}}\n\\label{table:constr}\n\\begin{tabular}{l|l|l|l|}\n\\cline{2-4}\n & \\textbf{Spont.} & \\textbf{Evoked} & \\textbf{Spon. 
to Evoked} \\\\ \\thickhline\n\\multicolumn{1}{|l|}{} & & & $\\nu_{PC}^{Sp}<\\nu_{PC}^{Ev}$ \\\\ \\cline{4-4} \n\\multicolumn{1}{|l|}{\\multirow{-2}{*}{\\textbf{Firing Rate}}} & \\multirow{-2}{*}{$\\nu_{PC}<\\nu_{OB}$} & \\multirow{-2}{*}{$\\nu_{PC}<\\nu_{OB}$} & $\\nu_{OB}^{Sp}<\\nu_{OB}^{Ev}$ \\\\ \\hline\n\\multicolumn{1}{|l|}{} & \\cellcolor[HTML]{C0C0C0} & $\\text{Var}_{PC}<\\text{Var}_{OB}$ & $\\text{Var}_{OB}^{Sp}<\\text{Var}_{OB}^{Ev} $ \\\\ \\cline{2-4} \n\\multicolumn{1}{|l|}{\\multirow{-2}{*}{\\textbf{Variability}}} & $FF_{PC}>FF_{OB}$ & \\cellcolor[HTML]{C0C0C0} & $FF_{PC}^{Sp}>FF_{PC}^{Ev}$ \\\\ \\hline\n\\multicolumn{1}{|l|}{} & \\cellcolor[HTML]{C0C0C0} & $\\text{Cov}_{PC}<\\text{Cov}_{OB}$ & \\cellcolor[HTML]{C0C0C0} \\\\ \\cline{2-4} \n\\multicolumn{1}{|l|}{\\multirow{-2}{*}{\\textbf{Co-variability}}} & $\\rho_{PC}>\\rho_{OB}$ & $\\rho_{PC}<\\rho_{OB}$ & $\\rho_{PC}^{Sp}>\\rho_{PC}^{Ev}$ \\\\ \\hline\n\\end{tabular}\n\\begin{flushleft} Relationships between population-averaged statistics (averages are across all cells or cell pairs) that were consistent across all odors. Other possible relationships were left out because they were \nambiguous and\/or odor dependent.\n\\end{flushleft}\n\\end{table}\n\nA common observation across different animals and sensory systems, is that firing rates increase in the evoked state (see, for example, Figure 3 in~\\cite{churchland10}). \nIndeed, we observed that \naverage firing rates in both the OB and PC were higher in the evoked state than in the spontaneous state (Fig~\\ref{fig1}C--D). \nFurthermore, the firing rate in the OB was larger than the firing rate in the PC, in both spontaneous and evoked states (see mean values in Fig~\\ref{fig1}C--D). 
\n\n\\textit{Stimulus-induced decorrelation} appears to be a widespread phenomenon in \nmany sensory systems and in many animals~\\cite{doiron16}; stimulus-induced decorrelation was previously reported in PC cells under different experimental conditions~\\cite{miura12}.\nHere, we found that in the PC, the average spike count correlation is lower in the evoked state (red) than in the spontaneous state (black), at least for time windows of 0.5\\,s and above (Fig~\\ref{fig2}A). \nAlthough we show a range of time windows for completeness, \nwe focus on the larger time windows because in our experiments the odors are held for 1\\,s; furthermore, our theoretical methods only address long time-averaged spiking statistics. \nNote that stimulus-induced decorrelation in the OB cells was not consistently observed across odors. \n\nAnother common observation in cortex is for variability to decrease at the onset of a stimulus~\\cite{churchland10}: \nin Fig~\\ref{fig2}B we see that the Fano Factor of spike counts in PC cells decreases in the evoked state (red) compared to the spontaneous state (black); note that other experimental labs have also\nobserved this decrease in the Fano factor of PC cells (see supplemental figure S6D in~\\cite{miura12}). \nFig~\\ref{fig2}C--D shows a comparison of PC and OB spike count correlation in the spontaneous state and evoked state, respectively. Spike count correlation in PC (green) \nis larger than correlation in OB (blue) in the spontaneous state, but in the evoked state the relationship switches, at least for time windows larger than 0.5\\,s. \n\n\\begin{figure}[!h]\n\\centering\n \\includegraphics[width=5.25in]{Fig3-eps-converted-to.pdf}\n\\caption{\\label{fig3} {\\bf The other trends in the experimental data that are consistent across all odors and all time windows.} 
The shaded error bars show the \\textit{standard error of the mean} above and below the mean statistic.\n(A) Fano Factor of spontaneous activity is larger in PC (green) than in OB (blue). (B) The spike count variance in the evoked state is smaller in PC (green) than in OB (blue). \n(C) Spike count covariance in the evoked state is smaller in PC (green) than in OB (blue). (D) In OB cells, the evoked spike count variance (red) is larger than the spontaneous (black). \nThe number of cells and number of pairs are the same as in Fig~\\ref{fig2}. Throughout we scale spike count variance and covariance by time window $T$ for aesthetic reasons. \n} \n\\end{figure}\n\nFig~\\ref{fig3} shows the four remaining constraints that are consistent for all odors and for all time windows. The Fano Factor in PC (green) is larger than in OB (blue), in the spontaneous state (Fig~\\ref{fig3}A); spike count variance in PC (green) \nis smaller than in OB (blue) in the evoked state (Fig~\\ref{fig3}B); spike count covariance in PC (green) is smaller than in OB (blue) in the evoked state (Fig~\\ref{fig3}C); and in OB the spike count variance in the evoked state (red) is larger than spontaneous (black, Fig~\\ref{fig3}D). \nThroughout the paper, we scale the spike count variance and covariance by time window for aesthetic reasons; \nthis does not affect the relative relationships.\n\n\\subsection*{A Minimal Firing Rate Model to Capture Data Constraints}\n\nWe model two distinct regions (OB and PC) with a system of six (6) stochastic differential equations, each representing the averaged activity of a neural \npopulation~\\cite{wilsoncowan1} or representative cell (see Fig~\\ref{fig4} for a schematic of the network). \nFor simplicity, in this section we use the word ``cell\" to refer to one of these populations. Each region has two excitatory ({\\bf E}) and one inhibitory ({\\bf I}) cell to account for a variety of spiking correlations. 
\n\nWe chose to include two E cells for two reasons: first, excitatory cells are the dominant source of projections between regions, and we need at least two E cells to compute an E-to-E correlation. Second, in our experimental data, we are most likely \nrecording from excitatory mitral and tufted cells (we do not distinguish between mitral and tufted cells here, and therefore refer to them as M\/T cells); therefore, the experimental measurements of correlations are likely to include many E-to-E correlations. \nThe arrays likely record I cell spiking activity as well, and the inclusion of the I cell is also important for capturing the stimulus-induced decreases in correlation and Fano factor~\\cite{churchland10,doiron16} \n(see also~\\cite{LMD_whisker_12}, which similarly used these same cell types to analyze spiking correlations in larger spiking network models).\n\nWe use $j\\in\\{1,2,3\\}$ to denote the three OB ``cells\" and $j\\in\\{4,5,6\\}$ for the three PC cells, with $j=1$ as the inhibitory OB granule cell and $j=4$ as the inhibitory PC cell.\nThe equations are:\n\\begin{eqnarray}\\label{eqn:gen_WC_pop}\n\t\\tau \\frac{d x_j}{dt} & = & -x_j + \\mu_j + \\sigma_j \\eta_j + \\sum_k g_{jk} F(x_k) \\label{eqn:dxdt_SDE}\n\\end{eqnarray}\nwhere $F(x_k)$ is a transfer function mapping activity to firing rate. Thus, the firing rate is:\n\\begin{equation}\n\t\\nu_j = F(x_j). \n\\end{equation}\nWe set the transfer function to $F(X)=\\frac{1}{2}\\left(1+\\tanh((X-0.5)\/0.1) \\right)$, a commonly used sigmoidal function~\\cite{wilsoncowan1}, for all cells; experimentally measured transfer functions can indeed be sigmoidal~\\cite{fellous03,prescott03,cardin08}. \nAll cells receive noise $\\eta_j$, the increment of a Wiener process, uncorrelated in time but correlated within a region: i.e., $\\langle \\eta_j(t) \\rangle = 0$, $\\langle \\eta_j(t) \\eta_j(t+s) \\rangle = \\delta(s)$, \nand $\\langle \\eta_j(t) \\eta_k(t+s) \\rangle = c_{jk} \\delta(s)$. 
We set $c_{jk}$ to:\n\\begin{eqnarray} c_{jk} = \\left\\{\n\\begin{array}{ll}\n 0, & \\hbox{if }j\\in\\{1,2,3\\}; k\\in\\{4,5,6\\} \\hbox{ (or vice versa)} \\\\\n 1, & \\hbox{if } j=k \\\\\n c_{OB}, & \\hbox{if }j\\neq k; j,k\\in\\{1,2,3\\} \\\\\n c_{PC}, & \\hbox{if }j\\neq k; j,k\\in\\{4,5,6\\} \\\\\n\\end{array} \\right.\n\\end{eqnarray}\nThe parameters $\\mu_j$ and $\\sigma_j$ are constants that give the input mean and input standard deviation, respectively.\nWithin a particular region (OB or PC), all three cells receive correlated background noisy input, but there is {\\bf no} correlated background input shared between PC and OB cells. This is justified \nby the experimental data (see Fig S9 in \\nameref{S2_file}); average pairwise OB-to-PC correlations are all relatively small and, in particular, less than pairwise correlations \\textit{within} the OB and PC. Furthermore, \nanatomically there are no known common inputs to both regions that are active at the same time. \n\nWe also set the background correlations to be higher in PC than in OB, i.e., \n$$ c_{PC} > c_{OB}. $$\nThis is justified in part by our array recordings, where correlated local field potential fluctuations are larger in PC than in OB. \nFurthermore, one source of background correlation is global synchronous activity; Murakami et al.~\\cite{murakami05} have demonstrated that state changes \n(i.e., slow or fast waves as measured by EEG) strongly affect odorant responses in piriform cortex but only minimally affect olfactory bulb cells. 
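Although our results rely on an efficient calculation of the statistics rather than on simulation, the system of Eq~1 with this noise-correlation structure can also be integrated directly. The following Euler--Maruyama sketch is illustrative only: the coupling matrix `G`, `mu`, `sigma`, `tau`, and the time step are placeholder values, not those constrained by the data:

```python
import numpy as np

def transfer(x):
    """Sigmoidal transfer function F(x) from the text; firing rate nu = F(x)."""
    return 0.5 * (1.0 + np.tanh((x - 0.5) / 0.1))

def simulate(G, mu, sigma, c_ob, c_pc, tau=0.01, dt=1e-4, n_steps=50_000, seed=1):
    """Euler--Maruyama integration of
        tau dx_j/dt = -x_j + mu_j + sigma_j eta_j + sum_k G[j,k] F(x_k),
    cells 0-2 = OB, cells 3-5 = PC; noise is white in time, correlated
    within a region (c_ob, c_pc) and uncorrelated across regions."""
    C = np.eye(6)
    for i in range(3):
        for j in range(3):
            if i != j:
                C[i, j] = c_ob          # within-OB noise correlation
                C[i + 3, j + 3] = c_pc  # within-PC noise correlation
    L = np.linalg.cholesky(C)           # L @ xi has correlation matrix C
    rng = np.random.default_rng(seed)
    x = np.zeros(6)
    xs = np.empty((n_steps, 6))
    for n in range(n_steps):
        xi = L @ rng.standard_normal(6)
        drift = (-x + mu + G @ transfer(x)) / tau
        x = x + drift * dt + (sigma / tau) * np.sqrt(dt) * xi
        xs[n] = x
    return xs

# Illustrative run: uncoupled populations, with c_OB < c_PC as in the text.
G = np.zeros((6, 6))
mu = np.full(6, 0.4)
sigma = np.full(6, 0.05)
xs = simulate(G, mu, sigma, c_ob=0.1, c_pc=0.3)
rates = transfer(xs[25_000:])   # discard the transient; nu_j = F(x_j)
mean_rates = rates.mean(axis=0)
```

Spike count statistics of the model follow from the time series of `rates`; the semi-analytic method used in the paper avoids this Monte Carlo step entirely.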
\nFinally, PC has more recurrent activity than the olfactory bulb; this\ncould lead to more recurrent common input, if not cancelled by inhibition~\\cite{renart10}.\n\n\\begin{figure}[!h]\n\\centering\n \\includegraphics[width=4.25in]{Fig4-eps-converted-to.pdf}\n\\caption{{\\bf Minimal firing rate model to analyze important synaptic conductance strengths.}\nA firing rate model (Wilson-Cowan) with background correlated noisy inputs is analyzed to derive principles relating these network attributes (see Eq~\\ref{eqn:gen_WC_pop} and the {\\bf Materials and Methods} section). \nThis model incorporates only some of the anatomical connections that are known to exist {\\it and} are important for modulation of the statistics of firing (see main text for further discussion). \nEach neuron within a region (OB or PC) receives correlated background noisy input, with $c_{OB} < c_{PC}$ (see main text).}\n\\label{fig4}\n\\end{figure}\n\n\\subsection*{Generality of Firing Rate Model Predictions}\n\nIn general, we should expect that if we change the wiring diagram of our simple firing rate model (Fig~\\ref{fig4}), then the same experimental constraints might result in different predictions. \nThis could be a concern since our simple firing rate model lacks many connections and cell types that exist in the real olfactory system~\\cite{oswald12}. \nHowever, we first tested one alternative wiring diagram with different neurons receiving stimulus input, no E-to-I connections within OB, and no E-to-I connections within PC; \nour predictions were robust to these changes. Second, and most importantly, we tested whether our predictions held in a larger network of leaky integrate-and-fire neurons. \nThis spiking network model also had more realistic network connectivity, more closely mimicking the known anatomy of real olfactory systems. 
\n\nThe following highlight the differences between the spiking model and the firing rate model:\n\\begin{itemize}\n\t\\item Include E-to-E connections from OB to PC (lateral olfactory tract). Also include strong E-to-I drive within PC because input from OB results in balanced \n\texcitation and inhibition in PC~\\cite{large16};\n\t\\item Remove the E-to-I connections from OB to PC ($gEO$ in the firing rate model) so that the recurrent activity in PC is driven by E inputs along the lateral olfactory tract;\n\t\\item Remove the direct sensory input to I cells in OB since granule cells do not receive direct sensory input~\\cite{oswald12};\n\t\\item Include substantial recurrent E-to-E connections within PC (see Table~\\ref{table:lif_parms} for strength relative to other connections). \n\\end{itemize}\n\nThe parameter $gEO$ will now refer to the strength of E-to-E connections, rather than E-to-I connections, from OB to PC. The next two sections demonstrate that our predictions hold for this LIF network model (also see \\nameref{S3_file}).\n\n\n\\subsection*{Results are Validated in a Spiking LIF Network}\n\nHere we show that a general leaky integrate-and-fire ({\\bf LIF}) spiking neuron model of the coupled OB-PC system can satisfy all 12 data constraints. \nRather than try to model the exact underlying physiological details of the olfactory bulb or anterior piriform cortex, \nour goal is to demonstrate that the results from the minimal firing rate model can be used as a guiding principle in a more realistic coupled \nspiking model with conductance-based synaptic input. 
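For reference, the basic ingredient of such a network -- a single LIF neuron with a conductance-based excitatory synapse -- can be sketched as follows. All parameter values and the presynaptic drive below are generic placeholders, not the values in Table~\ref{table:lif_parms} or the full OB--PC connectivity:

```python
import numpy as np

def lif_with_conductance(exc_times, t_max=1.0, dt=1e-4, tau_m=0.02,
                         v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                         e_exc=6.5, tau_syn=0.005, g_scale=0.5):
    """Leaky integrate-and-fire neuron with a conductance-based excitatory
    synapse: tau_m dv/dt = -(v - v_rest) - g_e(t) (v - e_exc).
    Placeholder (dimensionless) parameters; returns spike times in seconds."""
    n = int(t_max / dt)
    kicks = np.zeros(n)
    for t in exc_times:                 # presynaptic spike -> conductance kick
        idx = int(round(t / dt))
        if idx < n:
            kicks[idx] += g_scale
    v, g_e = v_rest, 0.0
    spikes = []
    for i in range(n):
        g_e += kicks[i]
        g_e -= dt * g_e / tau_syn       # exponential synaptic decay
        v += dt * (-(v - v_rest) - g_e * (v - e_exc)) / tau_m
        if v >= v_thresh:               # threshold crossing -> spike and reset
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

# Drive with a regular 200 Hz presynaptic train for one second.
pre = np.arange(0.0, 1.0, 0.005)
out = lif_with_conductance(pre)
```

The depolarization is bounded by the excitatory reversal potential `e_exc`; an inhibitory synapse would add an analogous term with a reversal potential below rest, which is how the full model captures hyperpolarization.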
The LIF model does not contain all of the attributes and cell types of the olfactory system, but is a plausible model that contains: \ni) more granule than M\/T cells in OB (a 4-to-1 ratio, comparable to the 3-to-1 ratio used in~\\cite{grabska17}); \nii) E-to-E connections from OB to PC that drive the entire network within PC; iii) E-to-I (granule cell) feedback from PC to OB; iv) lack of \nsensory input to granule I cells in OB. \n\nWe also show that the minimal firing rate model results can be applied to a generic cortical-cortical coupled population (see \\nameref{S3_file}).\n\nWe set the four conductance strength values to:\n\\begin{eqnarray}\n\tgIO &=& 7 \\nonumber \\\\\n\tgEO &=& 10 \\nonumber \\\\\n\tgIP &=& 20 \\nonumber \\\\\n\tgEP &=& 15. \\label{def_gs_lif}\n\\end{eqnarray}\nSee Fig~\\ref{fig6} or Eq~\\ref{ob_lif}--\\ref{pc_lif} for exact definitions of $gXY$; these conductance strength values are dimensionless scale factors. \nThese values were selected to satisfy the relationships derived from the analysis of the rate model (see Fig~\\ref{fig4}). \nIn contrast to the minimal firing rate model, here the conductance values are all necessarily positive; an inhibitory reversal potential is used to capture the hyperpolarization that occurs upon receiving synaptic input. \n\nWith the conductance strengths in Eq~\\ref{def_gs_lif}, and other standard parameter values (see Table~\\ref{table:lif_parms}) in a typical LIF model, we were able to easily satisfy all 12 constraints: see \nTable~\\ref{table:frate_lif} and Fig~\\ref{fig6}. \n\n\\begin{figure}[!h]\n \\includegraphics[width=\\columnwidth]{Fig6-eps-converted-to.pdf}\n\\caption{{\\bf Detailed spiking LIF model confirms the results from the analytic rate model.}\nSchematic of the LIF model with two sets of recurrently coupled E and I cells. There are 12 types of synaptic connections. \n(A) Pairwise correlations in PC, spontaneous vs. evoked: $\\rho_{PC}^{Sp}>\\rho_{PC}^{Ev}$. 
\n(B) Variability (Fano factor) in PC, spontaneous vs. evoked: $FF_{PC}^{Sp}>FF_{PC}^{Ev}$. \n(C) Correlations in the spontaneous state, PC vs. OB: $\\rho_{PC}^{Sp}>\\rho_{OB}^{Sp}$.\n(D) Correlations in the evoked state, PC vs. OB: $\\rho_{PC}^{Ev}<\\rho_{OB}^{Ev}$. \n(E) Variability (Fano factor) in the spontaneous state, PC vs. OB: $FF_{PC}^{Sp}>FF_{OB}^{Sp}$. \n(F) Variability (spike count variance) in the evoked state, PC vs. OB: $\\text{Var}_{PC}^{Ev}<\\text{Var}_{OB}^{Ev}$. \n(G) Covariances in the evoked state, PC vs. OB: $\\text{Cov}_{PC}^{Ev}<\\text{Cov}_{OB}^{Ev}$.\n(H) Variability (spike count variance) in OB, spontaneous vs. evoked: $\\text{Var}_{OB}^{Sp} < \\text{Var}_{OB}^{Ev}$. \nThe curves show the average statistics over all $N_{OB\/PC}$ cells or over a large random sample of all possible pairs. \nSee {\\bf Materials and Methods} for model details, and Table~\\ref{table:lif_parms} and Eq~\\ref{def_gs_lif} for parameter values.\n}\n\\label{fig6}\n\\end{figure}\n\n\\begin{table}[!ht]\n\\centering\n\\caption{{\\bf Population firing rate statistics from an LIF model of the OB--PC pathway.}}\n\\label{table:frate_lif}\n\\begin{tabular}{|c+c|c|}\n\\hline\n\\multicolumn{1}{|l|}{} & Mean Firing Rate (Hz) & Std. Dev. (Hz) \\\\ \\thickhline\n$\\nu_{OB}^{Sp}$ & 5.5 & 4.6 \\\\\n$\\nu_{OB}^{Ev}$ & 6.2 & 4.8 \\\\ \\hline\n$\\nu_{PC}^{Sp}$ & 2.1 & 2.6 \\\\\n$\\nu_{PC}^{Ev}$ & 4.1 & 5.8 \\\\ \\hline\n\\end{tabular}\n\\begin{flushleft} See {\\bf Materials and Methods} for model details, and Table~\\ref{table:lif_parms} and Eq~\\ref{def_gs_lif} for parameter values. 
The mean and standard deviations are across the heterogeneous population.\n\\end{flushleft}\n\\end{table}\n\nWhile the firing rates in the LIF network (Table~\\ref{table:frate_lif}) do not \\textit{quantitatively} match the firing rates from the experimental data, a few \\textit{qualitative} trends are apparent: (i) the ratio of mean \nspontaneous to evoked firing rates is similar to that observed in the experimental data, for both OB and PC; (ii) the same is true of the standard deviations; (iii) the ratio of the mean OB firing rate to the mean PC firing rate is \nsimilar to what is observed in the experimental data, in both spontaneous and evoked states. Therefore, the LIF network captures the mean firing rates reasonably well. \n\n\nOne difference between the LIF spiking network and the minimal firing rate model is that in the evoked state, the mean background input to \\textit{both} the OB and PC cells is increased, compared to the \nspontaneous state (recall that in the minimal firing rate model, only the mean input to the OB cells increased in the evoked state; this ensured that stimulus-induced changes in \nPC were due to network activity). When the mean input to the PC cells is the same in the spontaneous and evoked states, \n10 of the 12 constraints were satisfied -- the exceptions were the two constraints involving the PC correlation in the evoked state, which decreased but remained larger than the spontaneous correlation \n(see Fig. S13 in \\nameref{S2_file}). The reason is that as firing rates increase, the OB spiking is more variable and the synaptic input from OB to PC is noisier, which diffuses the PC activity. \nTo capture the final two constraints, we allowed the mean input drive to PC to increase in the evoked state. \nThis mechanism has also been used in previous theoretical studies to achieve stimulus-induced decreases in spiking variability and co-variability~\\cite{litwin_nn_12}. 
\nChurchland et al.~\\cite{churchland10} used an extra source of variability in the spike-generating mechanism, a doubly stochastic model, which was \nsimply removed at stimulus onset. \nThus, the mechanism we employ (increased mean input with lower input variability) is consistent with other studies that analyzed stimulus-induced changes in variability~\\cite{churchland10,litwin_nn_12}.\n\n\\subsubsection*{Results of Violating Derived Relationships Between Conductance Strengths}\n\nWhat happens in the full LIF spiking network when the derived relationships between the conductance strengths are violated? Since the minimal firing rate model differs from the \ndetailed spiking model in many ways, we do not expect the relationships between the conductance strengths to hold precisely. However, \nthe minimal firing rate model is still useful in providing intuition for what would otherwise be a complicated network \nwith a high-dimensional parameter space.\nWe now demonstrate that when the relationships derived in the firing rate model are violated, \na subset of the constraints in the experimental data (Table~\\ref{table:constr}) will no longer be satisfied in the large spiking network.\n\nBecause our network is heterogeneous, our ability to subsample cell pairs is limited, relative to a homogeneous network of the same size. Also, computing the statistics for even a single \nparameter set in the spiking network requires enormous computing resources. \nThus, we cannot exhaustively explore the parameter space; indeed, the purpose of the reduction method of the firing rate model is to probe the high-dimensional parameter space quickly. 
Instead, we perform three tests that violate the firing rate model results:\n\\begin{enumerate}\n\t\\item Make $gIO > gIP$ by setting $gIO=20$ and $gIP=7$.\n\t\\item Make $gEO > gEP$ by setting $gEO=15$ and $gEP=1$.\n\t\\item Make $gEP$ and $gIP$ relatively smaller by setting $gEP=10$ and $gIP=10$.\n\\end{enumerate}\nThe original values (used in Fig~\\ref{fig6}) for these parameters were given in Eq~\\ref{def_gs_lif}.\n\nThe result of Test 1 is that 2 of the 12 constraints are violated (see Fig S14 in \\nameref{S2_file}); most importantly, stimulus-induced decorrelation of the PC cells, which is particularly important in the context of coding, was not present. \nIn addition,\nthe evoked PC correlation is larger than the evoked OB correlation,\nviolating another constraint.\n\nThe result of Test 2 is that 3 of the 12 constraints are violated (see Fig S15 in \\nameref{S2_file}). \nThe evoked PC correlation is larger than the evoked OB correlation, and \nboth the variance and covariance in PC are larger than the corresponding quantities in OB in the evoked state, which is not consistent with our data. \n\nThe result of Test 3 is that 3 of the 12 constraints are violated: they are the same constraints that are violated in Test 2, despite quantitative differences in the statistics (see Fig S16 in \\nameref{S2_file}). \nThe stimulus-induced decorrelation of the PC cells does not hold for small windows, but this is also observed in our data (Fig~\\ref{fig2}A), so we do not formally count this as a clear violation of the data constraints. \nHowever, Tests 1 and 3 show that strong PC inhibition is key for stimulus-induced decorrelation~\\cite{doiron16,diesmann12,middleton12,LMD_whisker_12,litwin12,litwin11,renart10}.\n \n\\section*{Discussion}\n\nAs electrophysiological recording technology advances, there will be more datasets with simultaneous recordings of neurons, spanning larger regions of the nervous system.
\nSuch networks are inherently high-dimensional, making mechanistic analyses generally intractable without fast and reasonably accurate approximation methods. \nWe have developed a computational reduction method for a multi-population firing rate model~\\cite{wilsoncowan1} \n that enables analysis of the spiking statistics. Our work specifically enables theoretical characterizations of an important, yet hard-to-measure quantity -- synaptic connection strength -- using easy-to-measure \n spiking statistics. The method is computationally efficient, is validated with Monte Carlo simulations of spiking neural networks, and can provide insight into network structure. \n\nWe applied our computational methods to \nsimultaneous dual-array recordings in two distinct regions of the olfactory system: the olfactory bulb (OB) and anterior piriform cortex (PC). \nOur unique experimental dataset enables a detailed analysis of the first- and second-order spike count statistics in two activity states,\nand a comparison of how these \nstatistics are related between OB and PC cells. We found twelve (12) consistent trends that held across four odors in the dataset (Table~\\ref{table:constr}), and sought to identify \nwhat neural network attributes would account for these trends.\nWe focused on four important network attributes, specifically the conductance strengths in the following connections: feedforward inhibition within OB and within PC, excitatory projections from OB to PC neurons, and finally \nexcitatory projections from PC to OB. 
Our reduced firing rate model predicts several relationships that are then verified with a more detailed spiking network model, specifically: \ni) inhibition within the OB has to be weaker than the inhibition in PC, ii) excitation from PC to OB is generally stronger than excitation from OB to PC, \niii) connections that originate within PC have to be relatively strong compared to connections that originate within OB.\nThese results make a strong prediction that, to the best of our knowledge, is new and might be testable with simultaneous patch-clamp recordings. \n\t\nIn principle, our theory could be used to study the structure of other network features such as background correlation, noise level, transfer function, etc. \nIt is straightforward mathematically to incorporate other desired neural attributes (with the caveat of perhaps increasing the overall number of equations and terms in the approximations) without changing the basic structure of the framework. \nHere we have focused on the role of the strength of synaptic coupling; \nof course, other neural attributes can affect spike statistics (in particular, spike count correlation~\\cite{cohen11,doiron16}), some of which can conceivably change with stimuli. \nSpike count correlations can depend on intrinsic neural properties~\\cite{hong12,marella08,abouzeid09,barreiro10,barreiro12,ocker14}, network architecture~\\cite{rosenbaum10,litwin_nn_12,rosenbaum17} \nand synaptic inputs~\\cite{renart10,diesmann12,litwin11,litwin12,middleton12,LMD_whisker_12} \n(or combinations of these~\\cite{ostojic09,ly_ermentrout_09,trousdale12,BarreiroLy_RecrCorr_17}), plasticity~\\cite{rosenbaum13}, as well as top-down mechanisms~\\cite{mitchell09,cohen09,ruff14}.
Thus, correlation \nmodulation is a rich and deep field of study, and we do not presume our result is the only plausible explanation for spike statistics modulation.\n\nAlthough the minimal firing rate model did not include certain anatomical connections that are known to exist (e.g., recurrent excitation in the PC), \nthe model is meant for deriving qualitative principles rather than precise quantitative modeling of the \npathway. We based our simplifications on insights from recent experimental work: \nslice physiology \nhas shown that within PC, recurrent activity is dominated by inhibition~\\cite{large16}; previous \nwork has also shown that inhibitory synaptic events are much more common (than excitatory synaptic events) in PC and are much easier to elicit~\\cite{poo09}. Thus, the connection from excitatory OB cells to inhibitory PC cells ($gEO$ in Fig~\\ref{fig4}) \nshould be thought of as the net effect of these connections along the lateral olfactory tract. \nOther theoretical analyses of effective feedforward inhibitory networks have also neglected anatomical E-to-E connections~\\cite{middleton12,LMD_whisker_12}. Furthermore, this minimal model \nwas validated with a more realistic, recurrently coupled spiking network, which did include within-region excitatory connections (see Fig~\\ref{fig6} and Fig S14--S16 in \\nameref{S2_file}, as well as \\nameref{S3_file}).\n\n\nWe have only focused on first- and second-order firing statistics, even though in principle other, higher-order statistics may be important~\\cite{ohiorhenuan10,trousdale13,jovanovic16}. \nIf downstream neurons use a linear decoding scheme,\n then first- and second-order spiking statistics are sufficient for quantitative measures of neural coding~\\cite{kaybook,dayan2001theoretical}.
\nIt is currently unknown whether downstream neurons decode olfactory signals with a nonlinear decoder, but there is evidence in other sensory systems that second-order statistics are sufficient~\\cite{kohn16}. \nRecent work has shown conflicting results for coding in the olfactory bulb; one study found that decoding an odor in the presence of other odors might be more efficient using nonlinear decoding~\\cite{grabska17}, but another has shown that linear decoding is still plausible~\\cite{mathis16}. \n\nA second reason to neglect higher-order statistics is suggested by Fig. 5, where we show how the various data constraints narrow the scope of plausible models. Here, we saw that even with first- and second-order statistics, only 1\\% of the parameter sets satisfy the data constraints; including more constraints would limit the space further. In order to usefully include higher-order constraints, we would need to use a more detailed model and\/or larger \nparameter spaces.\n\nAs a test case for our method, we used recordings from anesthetized animals. The absence of natural breathing dynamics in tracheotomized rats \nin these experiments is only an approximation to olfactory processing in awake animals. \nHowever, there is a benefit to tracheotomized animals: the \ncomplex temporal firing patterns are removed, \nso that firing statistics are closer to stationarity. \nIn principle, we can incorporate breathing dynamics into our framework by including an oscillatory forcing term in Eq~\\ref{eqn:gen_WC_pop}; this will be the subject of future work. \nIn support of this simplification, we note that there is evidence that in the anterior piriform cortex, spike count --- rather than the timing --- is most consequential for odor discrimination~\\cite{miura12}.
\nHowever, other studies have reported that the timing of the stimuli in the olfactory bulb is important:~\\cite{cury10,gschwend12,grabska17} \nshowed decoding performance is best at the onset of odors in mammals and worsens as time \nproceeds, whereas~\\cite{friedrich01} found that decoding performance improved with time in zebrafish. These important issues are beyond the scope of the current study.\n\n\\subsection*{Relationship to Other Reduction Methods}\n\nIn computing statistics for the minimal firing rate model, we only considered equilibrium firing statistics, in which a set of stationary statistics can be solved self-consistently. \nMore sophisticated methods might be used to address oscillatory firing statistics (see~\\cite{nlc_15}, where the adaptive quadratic integrate-and-fire model was successfully analyzed with a reduction method); capturing the firing statistics in these other regimes is a potentially interesting direction of research. The limitation to steady-state statistics is not unique, but is shared by other approximation methods. \nSome methods are known to have issues when the system bifurcates~\\cite{buice07,buice10} because truncation methods can fail~\\cite{ly_tranchina_07}.\n\nSeveral authors have proposed procedures to derive population-averaged first- and second-order \nspiking statistics from the dynamics of single neurons. The microscopic dynamics in question may be given by a master equation~\\cite{buice07,bressloff09,buice10,touboul11,bressloff15}, a generalized linear model~\\cite{toyoizumi09,ocker2017}, or the theta model~\\cite{BC_JSM_2013,BC_PLOSCB_2013}. \n(Other authors have derived rate equations at the single-neuron level, by starting with a spike response model~\\cite{aviel06} or by taking the limit of slow synapses~\\cite{ermentrout94}.)
While we would ideally use a similar procedure to derive our rate equations, none of the approaches we note here is yet adapted to deal with our setting, a heterogeneous network of leaky integrate-and-fire neurons. Instead, we focused here on perturbing from a background state in which several populations \n(each population modeled by a single equation) \nreceive correlated background input but are otherwise uncoupled. \nThis allows us to narrow our focus to how spike count co-variability from common input is modulated by recurrent connections. \n\nWe also note that other recent works have used firing rate models to explain observed patterns of correlated spiking activity in response to stimuli. \nRosenbaum et al.~\\cite{rosenbaum17} have studied the spatial structure of correlation in primate visual cortex with balanced networks~\\cite{van96}; \nKeane \\& Gong~\\cite{keaneGong15} studied wave propagation in balanced network models. \n \n\n\n\n\n\\section*{Conclusion}\n\nDesigning a spiking neural network model of two different regions that satisfies the many experimental data constraints we have outlined is a difficult problem that would often be addressed \nvia undirected simulations. We have shown that \nsystematic analysis of a minimal firing rate model can yield valuable insights into the relative strength of unmeasured network connections. Furthermore, these insights are transferable to a more complex, physiologically realistic spiking model of the OB--PC pathway. \nIndeed, incorporating the relative relationships of the four conductance strengths resulted in spiking network models that satisfied the constraints. \nStrongly violating the relative relationships of these conductance strengths led to multiple violations of the data constraints. 
Because our framework can be extended to other network features, we are hopeful that\nthe general approach we have developed -- using easy-to-measure quantities to predict hard-to-measure interactions -- will be valuable in future investigations into how whole-brain function emerges from interactions among its constituent components.\n\n\n\n\\section*{Materials and Methods}\n\n\\subsection*{Electrophysiological Recordings}\n\n \n\n{\\bf Subjects.} All procedures were carried out in accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health and approved by the \nUniversity of Arkansas Institutional Animal Care and Use Committee (protocol \\#14049). Experimental data were obtained from one adult male rat (289\\,g; \\textit{Rattus norvegicus}, Sprague-Dawley \noutbred, Harlan Laboratories, TX, USA) housed in an environment of controlled humidity (60\\%) and temperature (23$^{\\circ}$C) with 12\\,h light-dark cycles. The experiments were performed in the light phase. \n\n\\vspace{.1in}\n\\hspace{-.25in} {\\bf Anesthesia.} Anesthesia was induced with isoflurane inhalation and maintained with urethane (1.5\\,g\/kg body weight ({\\bf bw}) dissolved in saline, intraperitoneal injection ({\\bf ip})). \nDexamethasone (2\\,mg\/kg bw, ip) and atropine sulphate (0.4\\,mg\/kg bw, ip) were administered before performing surgical procedures.\n\n\\vspace{.1in}\n\\hspace{-.25in} {\\bf Double tracheotomy surgery.} To facilitate ortho- and retronasal delivery of the odorants, a double tracheotomy surgery was performed as described previously \\cite{gautam12}. This \nallowed the rat to sniff artificially while breathing naturally through the trachea, bypassing the nose. A Teflon tube (OD 2.1\\,mm, upper tracheotomy tube) was inserted 10\\,mm into the nasopharynx through the rostral \nend of the tracheal cut.
Another Teflon tube (OD 2.3\\,mm, lower tracheotomy tube) was inserted into the caudal end of the tracheal cut to allow breathing. Both tubes were fixed and sealed to the tissues using surgical \nthread. Local anesthetic (2\\% Lidocaine) was applied at all pressure points and incisions. \nThroughout the surgery and electrophysiological recordings, the rat's core body temperature was maintained at 37$^{\\circ}$C with a thermostatically controlled heating pad.\n\n\\vspace{.1in}\n\\hspace{-.25in} {\\bf Craniotomy surgery.} Subsequently, a craniotomy surgery was performed on the dorsal surface of the skull at two locations, one over the right Olfactory Bulb \n(2\\,mm $\\times$ 2\\,mm, centered 8.5\\,mm rostral to bregma and 1.5\\,mm lateral from midline) and the other over the right anterior Piriform Cortex (2\\,mm $\\times$ 2\\,mm, centered 1.5\\,mm caudal to bregma and \n5.5\\,mm lateral from midline).\n\n\\vspace{.1in}\n\\hspace{-.25in} {\\bf Presentation of ortho- and retronasal odorants.} The bidirectional artificial sniffing paradigm previously used for the presentation of ortho- and retronasal odorants \\cite{gautam12} \nwas slightly modified such that instead of a nose mask, a Teflon tube was inserted into the right nostril and the left nostril was sealed by suturing. The upper tracheotomy tube inserted into the nasopharynx was used \nto deliver odor stimuli retronasally (Fig~\\ref{fig1}). We used two different odorants, Hexanal ({\\bf Hexa}) and Ethyl Butyrate ({\\bf EB}), delivered by both ortho- and retronasal routes, thereby constituting 4 \ndifferent odor stimuli.
Each trial consisted of 10 one-second pulse presentations of an odor, with a 30\\,s interval between pulses and 2--3\\,min between trials.\n\n\\vspace{.1in}\n\\hspace{-.25in} {\\bf Electrophysiology.} Extracellular voltage was recorded simultaneously from OB and aPC using two different sets of 32-channel microelectrode arrays ({\\bf MEAs}).\n (OB: A4x2tet, 4 shanks $\\times$ 2 iridium tetrodes per shank, inserted 400 $\\mu$m deep from the dorsal surface; aPC: Buzsaki 32L, 4 shanks $\\times$ 8 iridium electrode sites per shank, \n 6.5\\,mm deep from the dorsal surface; NeuroNexus, MI, USA). Voltages were measured with respect to an AgCl ground pellet placed in the saline-soaked gel foams covering the exposed brain surface around the inserted \n MEAs. Voltages were digitized at a 30\\,kHz sample rate as described previously \\cite{gautam15} using Cereplex + Cerebus, Blackrock Microsystems (UT, USA). \n\nRecordings were filtered between 300 and 3000\\,Hz, and semiautomatic spike sorting was performed using Klustakwik software, which is optimized for the types of electrode arrays used here \\cite{rossant16}. \nAfter automatic sorting, each unit was visually inspected to ensure quality of sorting. \n\n\\subsection*{Data processing}\n\nAfter the array recordings were spike sorted to identify activity from distinct cells, we further processed the data as follows:\n\\begin{itemize}\n\\item We computed the average firing rate for each cell, where the average was taken over all trials and over the entire trial length (i.e., not distinguishing between spontaneous and evoked periods); units with firing rates below 0.008\\,Hz or above 49\\,Hz were excluded.
\n\t\\item When spike times from the same unit were within 0.1\\,ms of each other, only the first (earliest) spike time was kept and the subsequent spike times were discarded.\n\\end{itemize}\n\nWe divided each 30\\,s trial into two segments, representing the odor-{\\bf evoked} state (first 2 seconds) and the {\\bf spontaneous} state (remaining 28 seconds). \nIn each state, we are interested in the random spike counts of the population in a particular window of size $T_{win}$. For a particular time window, \nthe $j^{th}$ neuron has a spike count instance $N_j$ in the time interval $[t,t+T_{win})$:\n\\begin{equation}\\label{n_cnt}\n\tN_j = \\sum_k \\int_{t}^{t+T_{win}} \\delta(s-t_k)\\,ds,\n\\end{equation}\nwhere the $t_k$ are the spike times of neuron $j$.\n\nThe spike count correlation between cells $j$ and $k$ is given by:\n\\begin{equation}\\label{rho_defn}\n\t\\rho_{T} = \\frac{ \\text{Cov}(N_j,N_k) }{ \\sqrt{ \\text{Var}(N_j) \\text{Var}(N_k) } },\n\\end{equation}\nwhere the {\\it covariance} of spike counts is: \n\\begin{equation}\\label{cov_defn}\n\t\\text{Cov}(N_j,N_k) = \\frac{1}{n-1} \\sum \\left( N_j - \\mu(N_j) \\right) \\left( N_k - \\mu(N_k) \\right).\n\\end{equation}\n Here $n$ is the total number of observations of $N_{j\/k}$, and $\\mu(N_j):=\\frac{1}{n}\\sum N_j$ is the mean spike count across $T_{win}$-windows and trials. \n The correlation $\\rho_{T}$ is a normalized measure of the trial-to-trial co-variability (i.e., noise correlation), satisfying $\\rho_{T}\\in[-1,1]$; it is also referred to as the {\\it Pearson correlation coefficient}. \nFor each cell pair, the covariance $\\text{Cov}(N_j,N_k)$ and variance $\\text{Var}(N_j)$ are empirically calculated by averaging across different time windows within a trial {\\it and} different trials.
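As a concrete illustration, the statistics in Eqs~\ref{rho_defn} and~\ref{cov_defn} amount to the following computation on binned spike counts (a minimal sketch in Python; the function names and example arrays are ours, not part of the analysis code used for this study):

```python
import numpy as np

def spike_count(spike_times, t, T_win):
    """Spike count of one unit in the window [t, t + T_win), as in Eq (n_cnt)."""
    s = np.asarray(spike_times)
    return int(np.sum((s >= t) & (s < t + T_win)))

def count_stats(counts_j, counts_k):
    """Sample covariance (Eq cov_defn) and Pearson correlation (Eq rho_defn)
    of two units' spike counts.

    counts_j, counts_k: arrays of length n, one entry per observation
    (time window x trial).
    """
    counts_j = np.asarray(counts_j, dtype=float)
    counts_k = np.asarray(counts_k, dtype=float)
    n = counts_j.size
    mu_j, mu_k = counts_j.mean(), counts_k.mean()
    cov = np.sum((counts_j - mu_j) * (counts_k - mu_k)) / (n - 1)
    rho = cov / np.sqrt(counts_j.var(ddof=1) * counts_k.var(ddof=1))
    return cov, rho
```

In practice, `counts_j` is assembled by applying `spike_count` to every (overlapping) window and every trial of a given state, so that the averages run over windows and trials exactly as in the text.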
\n\nA standard measure of \nvariability is the Fano factor of spike counts, which is the variance scaled by the mean:\n\\begin{equation}\\label{FF_defn}\n\tFF_k = \\frac{\\text{Var}(N_k)}{\\mu(N_k)}.\n\\end{equation}\n\nIn principle, any of the statistics defined here might depend on the time $t$ as well as the time window size $T_{win}$; here, we assume that $\\text{Var}$, $\\text{Cov}$, $FF$, and $\\rho_T$ are stationary in time, and thus separate time windows based only on whether they occur in the evoked (first 2 seconds) \nor spontaneous (last 28 seconds) state. \n\nEach trial of experimental \ndata has many time windows\\footnote{an exception is when $T_{win}=2\\,$s; in the evoked state, there is only 1 window per trial}; the exact number depends on the state, the value of $T_{win}$, and whether \ndisjoint or overlapping windows are used. In this paper we use windows that overlap by half the length of $T_{win}$\\footnote{e.g. if the trial length is 2\\,s and $T_{win}=1\\,$s, then there are 3 total windows per trial: [0\\,s, 1\\,s], [0.5\\,s, 1.5\\,s], and [1\\,s, 2\\,s]} \nto calculate the spiking statistics. The results are qualitatively similar for disjoint windows; importantly, the relationships\/constraints are the same with disjoint windows. We limit $T_{win}$ to at most 2\\,s \nbecause this is the maximum duration of the evoked state within each trial.\n\n\nThe average spike count $\\mu(N_j)$ of the $j^{th}$ neuron with a particular time window $T_{win}$ is related to the average firing rate $\\nu_j$ of that neuron:\n\\begin{equation}\\label{frate_defn}\n\t\\nu_j := \\frac{\\mu(N_j)}{T_{win}}.\n\\end{equation}\n\n\n\\subsection*{Firing Rate Model}\n\nRecall that the activity in each representative cell is modeled by:\n\\begin{equation}\\label{frate_firsteqn}\n\t\\tau \\frac{d x_j}{dt} = -x_j + \\mu_j + \\sigma_j \\eta_j + \\sum_k g_{jk} F(x_k), \n\\end{equation}\nwhere $F(x_k)$ is a transfer function mapping activity to firing rate.
Thus, the firing rate is:\n\\begin{equation}\n\t\\nu_j = F(x_j). \n\\end{equation}\n\nCells are indexed as follows: $j\\in\\{1,2,3\\}$ for the 3 OB cells, and $j\\in\\{4,5,6\\}$ for the 3 PC cells, with $j=1$ as the inhibitory granule OB cell and $j=4$ as the inhibitory PC cell (see Fig~\\ref{fig4}). In \nthis paper, we set $\\sigma_1=\\sigma_2=\\sigma_3=\\sigma_{OB}$ and $\\sigma_4=\\sigma_5=\\sigma_6=\\sigma_{PC}$ (see Table~\\ref{table:rateMod_parms}).\n\n\\begin{table}[!ht]\n\\caption{ {\\bf Parameters of the rate model (Eq~\\ref{eqn:gen_WC_pop}). The only difference between the spontaneous and evoked states is that the mean input to OB increased in the evoked state. \nWe set $\\tau=1$ throughout.}}\n\\label{table:rateMod_parms}\n\\begin{tabular}{l|c|l|cc|}\n\\cline{2-5}\n & \\multicolumn{1}{l|}{\\textbf{Parameter}} & \\multicolumn{1}{c|}{\\textbf{Definition}} & \\textbf{Spontaneous Value} & \\multicolumn{1}{l|}{\\textbf{Evoked Value}} \\\\ \\hline\n\\multicolumn{1}{|l|}{\\multirow{5}{*}{Olfactory Bulb}} & $\\mu_1$ & \\multicolumn{1}{c|}{Mean Input} & 13\/60 & \\textbf{26\/60} \\\\\n\\multicolumn{1}{|l|}{} & $\\mu_2$ & \\multicolumn{1}{c|}{} & 9\/60 & \\textbf{18\/60} \\\\\n\\multicolumn{1}{|l|}{} & $\\mu_3$ & & 7\/60 & \\textbf{14\/60} \\\\\n\\multicolumn{1}{|l|}{} & $\\sigma_{OB}$ & Background Noise Level & 1.4 & 1.4 \\\\\n\\multicolumn{1}{|l|}{} & $c_{OB}$ & \\multicolumn{1}{c|}{OB Background Correlation} & 0.3 & 0.3 \\\\ \\hline\n\\multicolumn{1}{|l|}{\\multirow{5}{*}{Piriform Cortex}} & $\\mu_4$ & \\multicolumn{1}{c|}{Mean Input} & 9\/60 & 9\/60 \\\\\n\\multicolumn{1}{|l|}{} & $\\mu_5$ & & 5\/60 & 5\/60 \\\\\n\\multicolumn{1}{|l|}{} & $\\mu_6$ & & 3\/60 & 3\/60 \\\\\n\\multicolumn{1}{|l|}{} & $\\sigma_{PC}$ & Background Noise Level & 2 & 2 \\\\\n\\multicolumn{1}{|l|}{} & $c_{PC}$ & PC Background Correlation & 0.35 & 0.35 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nIn the absence of coupling (i.e.
$g_{jk} = 0$), any pair of activity variables, $(x_j,x_k)$, are bivariate normally distributed because the equations:\n\\begin{eqnarray}\n\t\\tau \\frac{d x_j}{dt} & = & -x_j + \\mu_j + \\sigma_j \\left( \\sqrt{1-c_{jk}}\\xi_j(t) + \\sqrt{c_{jk}} \\xi_c(t) \\right) \\\\\n\t\\tau \\frac{d x_k}{dt} & = & -x_k + \\mu_k + \\sigma_k \\left( \\sqrt{1-c_{jk}}\\xi_k(t) + \\sqrt{c_{jk}} \\xi_c(t) \\right) \n\\end{eqnarray}\ndescribe a multi-dimensional Ornstein-Uhlenbeck process~\\cite{gardiner}. Note that we have re-written $\\eta_{j\/k}(t)$ as sums of independent white noise processes $\\xi(t)$, which is always possible for Gaussian white noise. \nSince $x_j(t) = \\frac{1}{\\tau}\\int_0^t e^{-(t-u)\/\\tau} \\Big[ \\mu_j + \\sigma_j\\eta_j(u) \\Big]\\,du$, we calculate marginal statistics as follows: \n\\begin{equation}\n\t\\mu(j) \\equiv \\langle x_j \\rangle = \\mu_j + 0 \\label{eqn:mu_uncoupled}\n\\end{equation}\n\n\\begin{eqnarray*}\n\t\\sigma^2(j) & \\equiv & \\langle (x_j - \\mu(j) )^2 \\rangle \\\\\n\t & = & \\left\\langle \\frac{\\sigma^2_j}{\\tau^2} \\int_0^t \\int_0^t e^{-(t-u)\/\\tau} \\eta_j(u) e^{-(t-v)\/\\tau} \\eta_j(v) \\,du\\,dv \\right\\rangle \\\\\n\t & = &\\frac{\\sigma^2_j}{\\tau^2} \\lim_{t\\to\\infty} \\int^t_0 e^{-2(t-u)\/\\tau} \\,du=\\frac{\\sigma^2_j}{2\\tau}\n\\end{eqnarray*}\n\nA similar calculation shows that in general we have:\n\\begin{equation}\n\t\\text{Cov}(j,k) = \\frac{c_{jk}}{2\\tau} \\sigma_j \\sigma_k \\label{eqn:cov_uncoupled}\n\\end{equation}\n\nThus, $(x_j,x_k)\\sim \\mathcal{N}\\left( \\left(\\begin{smallmatrix}\\mu_j \\\\ \\mu_k \\end{smallmatrix}\\right) , \\frac{1}{2\\tau} \\left(\\begin{smallmatrix} \\sigma^2_j & \\sigma_j\\sigma_k c_{jk} \\\\ \\sigma_j\\sigma_k c_{jk} & \\sigma^2_k \\end{smallmatrix}\\right) \\right)$.\n\nTo simplify notation, we define:\n\\begin{eqnarray}\n\t\\rho_{SN}(y) &:=& \\frac{1}{\\sqrt{2\\pi}} e^{-y^2\/2}, \\hbox{ the standard normal PDF} \\\\\n\t\\rho_{2D}(y_1,y_2) &:=& 
\\frac{1}{2\\pi\\sqrt{1-c_{jk}^2}} \\exp\\Big( -\\frac{1}{2}\\vec{y}^T \\left(\\begin{smallmatrix} 1 & c_{jk} \\\\ c_{jk} & 1 \\end{smallmatrix}\\right)^{-1} \\vec{y} \\Big), \\hbox{ bivariate standard normal} \\nonumber \\\\\n\t\t\t\t\t & &\n\\end{eqnarray}\nWith coupling, an exact expression for a joint distribution for $(x_1, x_2, x_3, x_4, x_5, x_6)$ is not explicitly known. However, we can estimate this distribution (and any derived statistics, such as means and variances) using Monte Carlo simulations. All \nMonte Carlo simulations of the six (6) coupled SDEs were performed using a time step of 0.01 with a standard Euler-Maruyama method, for a time of 500 units \n(arbitrary, but relative to the characteristic time scale $\\tau=1$) for each of the 3000 realizations. \nThe activity $x_j$ was sampled at each time step after an equilibration period. \n\nFurthermore, we can approximate moments of the joint distribution under the assumption of weak coupling, as described in the next section.\n\n\\subsection*{Approximation of Firing Statistics in the Firing Rate Model}\nWe will now show how to compute approximate first and second order statistics for the firing rate model \n\\textit{with coupling}; i.e., we aim to compute the mean activity $\\langle x_j \\rangle$, mean firing rate $\\langle F(x_j) \\rangle$, variance and covariances of both: $\\langle x_j x_k \\rangle$ and $\\langle F(x_j) F(x_k) \\rangle$.\nFor a simpler exposition, we have only included twelve synaptic connections; we have excluded self (autaptic) connections and E$\\to$E connections. 
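The uncoupled stationary moments in Eqs~\ref{eqn:mu_uncoupled}--\ref{eqn:cov_uncoupled} are easy to check against a direct Euler--Maruyama simulation of the kind described above. The sketch below (Python; the parameter values are illustrative, and for brevity we use one long run rather than 3000 realizations) simulates two uncoupled activity variables driven by correlated background noise:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, dt, T = 1.0, 0.01, 2000.0  # time step 0.01, as in the text; T is illustrative
mu = np.array([0.2, 0.15])      # illustrative mean inputs
sigma = np.array([1.4, 1.4])    # illustrative background noise levels
c = 0.3                         # background correlation

n_steps = int(T / dt)
burn = int(100.0 / dt)          # equilibration period before sampling
x = mu.copy()
samples = np.empty((n_steps - burn, 2))
for i in range(n_steps):
    xi = rng.standard_normal(3)  # xi[2] is the shared noise source
    eta = np.sqrt(1.0 - c) * xi[:2] + np.sqrt(c) * xi[2]
    x = x + (dt / tau) * (mu - x) + (sigma / tau) * np.sqrt(dt) * eta
    if i >= burn:
        samples[i - burn] = x

# closed-form stationary moments: Var = sigma^2/(2 tau), Cov = c sigma_j sigma_k/(2 tau)
var_theory = sigma**2 / (2.0 * tau)
cov_theory = c * sigma[0] * sigma[1] / (2.0 * tau)
print("empirical var:", samples.var(axis=0), " theory:", var_theory)
print("empirical cov:", np.cov(samples.T)[0, 1], " theory:", cov_theory)
```

The empirical means, variances, and covariance converge to $\mu_j$, $\sigma_j^2/(2\tau)$, and $c\,\sigma_j\sigma_k/(2\tau)$, consistent with the bivariate normal stationary distribution stated above.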
\n\nAn equation for each statistic can be derived by first writing Eq~\\ref{frate_firsteqn} as a low-pass filter of the right-hand side: \n\\begin{eqnarray}\nx_j(t) &= & \\frac{1}{\\tau}\\int_0^t e^{-(t-u)\/\\tau} \\Big[ \\mu_j + \\sigma_j\\eta_j(u) + \\sum_k g_{jk} F(x_k(u)) \\Big]\\,du\n\\end{eqnarray}\nWe then take expectations and let $t \\rightarrow \\infty$:\n\\begin{eqnarray}\n\\mu(j):=\\langle x_j \\rangle & = & \\mu_j + \\langle \\sum_k g_{jk} F(x_k) \\rangle = \\mu_j + \\sum_{k} g_{jk} \\langle F(x_k) \\rangle \n\\end{eqnarray}\nWe assume the stochastic processes are ergodic, which is generally true for these types of stochastic differential equations, so that averaging over time is equivalent to averaging over the \ninvariant measure. \n\nWe will make several assumptions for computational efficiency. \nFirst, we only account for direct connections in the formulas for the first- and second-order statistics, assuming the terms from the indirect connections \nare either small or already accounted for in the direct connections.
We further make the following assumptions to simplify the calculations:\n\\begin{align}\n\t& \\left\\langle \\int_0^t F(x_k(u))e^{-(t-u)\/\\tau}\\,du \\int_0^t F(x_k(v))e^{-(t-v)\/\\tau}\\,dv \\right\\rangle \\approx \\frac{\\tau}{2} \\mathbb{E}\\left[ F^2(x_k) \\right] \\label{ass_Fvar} \\\\\n\t& \\hbox{where } \\mathbb{E}\\left[ F^2(x_k) \\right] := \\int F^2(\\sigma(k)y+\\mu(k))\\,\\rho_{SN}(y)\\,dy \\\\\t\n\t& \\left\\langle \\int_0^t \\sigma_j\\eta_j(u)e^{-(t-u)\/\\tau}\\,du \\int_0^t F(x_k(v))e^{-(t-v)\/\\tau}\\,dv \\right\\rangle \\approx \\frac{\\tau}{2} \\mathbb{E}\\left[ N_j F(x_k) \\right], \\hbox{ if }j\\neq k \\label{ass_nzFa} \\\\\n\t& \\hbox{where } \\mathbb{E}\\left[ N_j F(x_k) \\right] := \\frac{\\sigma_j}{\\sqrt{2}} \\iint y_1 F(\\sigma(k)y_2+\\mu(k))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\\\\n\t& \\left\\langle \\int_0^t \\sigma_j\\eta_j(u)e^{-(t-u)\/\\tau}\\,du \\int_0^t F(x_k(v))e^{-(t-v)\/\\tau}\\,dv \\right\\rangle \\nonumber \\\\\n\t& \\approx \\frac{\\tau}{2} \\frac{\\sigma_k}{\\sqrt{2}} \\int y F(\\sigma(k)y +\\mu(k))\\,\\rho_{SN}(y)\\,dy, \\hbox{ if } j = k \\label{ass_nzFb} \\\\\n\t& \\left\\langle \\int_0^t F(x_j(u))e^{-(t-u)\/\\tau}\\,du \\int_0^t F(x_k(v))e^{-(t-v)\/\\tau}\\,dv \\right\\rangle \\approx \\frac{\\tau}{2} \\mathbb{E}\\left[ F(x_j)F(x_k) \\right] \t\t\\label{ass_Fcov} \\\\ \n\t& \\hbox{where } \\mathbb{E}\\left[ F(x_j) F(x_k) \\right] := \\nonumber \\\\\n\t & \\iint F(\\sigma(j)y_1+\\mu(j)) F(\\sigma(k)y_2+\\mu(k))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{ass_end}\n\\end{align}\nand $N_j$ denotes the random variable $\\int_0^t \\sigma_j\\eta_j(u) e^{-(t-u)\/\\tau}\\,du$, which is by itself normally distributed with mean 0 and variance $\\sigma_j^2\\tau\/2$. 
\n\nThe first assumption, Eq~\\ref{ass_Fvar}, states that the time average of $F(x_k(t))$ multiplied by an exponential function (low-pass filter) is equal to the expected value scaled by $\\tau\/2$; \nthe second and third, Eq~\\ref{ass_nzFa} and Eq~\\ref{ass_nzFb}, address $N_j$ and $F(x_k(t))$, for $j \\not= k$ and $j=k$ respectively (similarly for Eq~\\ref{ass_Fcov}). \n\nIn all of the definitions for the expected values with $\\rho_{2D}$, note that the underlying correlation $c_{jk}$ depends on the pair of interest $(j,k)$. \nFinally, we assume that the activity variables $(x_j,x_k)$ are pairwise normally distributed with the statistics above; this is sufficient to ``close\" our model and solve for the statistical quantities self-consistently. \nThis is implicitly a weak coupling assumption because with no coupling, $(x_j,x_k)$ are bivariate normal random variables. \n\nThe resulting approximations for the mean activity are:\n\\begin{align}\n\t& \\mu(1) = \\mu_1 + \\sum_{k=2,3,5,6} g_{1k}\\int F(\\sigma(k)y+\\mu(k))\\,\\rho_{SN}(y)\\,dy\t\\label{mn_x1} \\\\\n\t& \\mu(2) = \\mu_2 + g_{21}\\int F(\\sigma(1)y+\\mu(1))\\,\\rho_{SN}(y)\\,dy \t\t\\label{mn_x2} \\\\\n\t& \\mu(3) = \\mu_3 + g_{31}\\int F(\\sigma(1)y+\\mu(1))\\,\\rho_{SN}(y)\\,dy \t\t\\label{mn_x3} \\\\ \n\t& \\mu(4) = \\mu_4 + \\sum_{k=2,3,5,6} g_{4k}\\int F(\\sigma(k)y+\\mu(k))\\,\\rho_{SN}(y)\\,dy \\label{mn_x4} \\\\\t\n\t& \\mu(5) = \\mu_5 + g_{54}\\int F(\\sigma(4)y+\\mu(4))\\,\\rho_{SN}(y)\\,dy \t\t\\label{mn_x5} \\\\\n\t& \\mu(6) = \\mu_6 + g_{64}\\int F(\\sigma(4)y+\\mu(4))\\,\\rho_{SN}(y)\\,dy. 
\t\t\\label{mn_x6}\n\\end{align}\nThe resulting approximation to the variances of the mean activity are:\n\\begin{eqnarray}\n\t\\tau\\sigma^2(1) &=& \\frac{\\sigma_1^2}{2} + \\sum_{k=2,3,5,6} \\frac{g^2_{1k}}{2} \\text{Var}\\big(F(\\sigma(k)Y+\\mu(k))\\big) \\nonumber \\\\\n\t\t\t\t& &+ \\sum_{(j,k)\\in\\{(2,3);(5,6)\\}} g_{1j}g_{1k}\\text{Cov}\\big(F(\\sigma(j)Y_1+\\mu(j)),F(\\sigma(k)Y_2+\\mu(k))\\big) \\label{var_x1} \\\\\n\t\\tau\\sigma^2(2) &=& \\frac{\\sigma_2^2}{2} + \\frac{g^2_{21}}{2} \\text{Var}\\big(F(\\sigma(1)Y+\\mu(1))\\big) \\nonumber \\\\\t\n\t\t\t\t& & + \\sigma_2 g_{21} \\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(1)y_2+\\mu(1))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{var_x2} \\\\\n\t\\tau\\sigma^2(3) &=& \\frac{\\sigma_3^2}{2} + \\frac{g^2_{31}}{2} \\text{Var}\\big(F(\\sigma(1)Y+\\mu(1))\\big) \\nonumber \\\\\t\n\t\t\t\t& & + \\sigma_3 g_{31} \\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(1)y_2+\\mu(1))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{var_x3} \\\\\n\t\\tau\\sigma^2(4) &=& \\frac{\\sigma_4^2}{2} + \\sum_{k=2,3,5,6} \\frac{g^2_{4k}}{2} \\text{Var}\\big(F(\\sigma(k)Y+\\mu(k))\\big) \\nonumber \\\\\n\t\t\t\t& &+ \\sum_{(j,k)\\in\\{(2,3);(5,6)\\}} g_{4j}g_{4k}\\text{Cov}\\big(F(\\sigma(j)Y_1+\\mu(j)),F(\\sigma(k)Y_2+\\mu(k))\\big) \\\\\n\t\\tau\\sigma^2(5) &=& \\frac{\\sigma_5^2}{2} + \\frac{g^2_{54}}{2} \\text{Var}\\big(F(\\sigma(4)Y+\\mu(4))\\big) \\nonumber \\\\\t\n\t\t\t\t& & + \\sigma_5 g_{54} \\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(4)y_2+\\mu(4))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{var_x5} \\\\\n\t\\tau\\sigma^2(6) &=& \\frac{\\sigma_6^2}{2} + \\frac{g^2_{64}}{2} \\text{Var}\\big(F(\\sigma(4)Y+\\mu(4))\\big) \\nonumber \\\\\t\n\t\t\t\t& & + \\sigma_6 g_{64} \\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(4)y_2+\\mu(4))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{var_x6} \t\t\t\t\n\\end{eqnarray}\n\nIn Eq~\\ref{mn_x1}--\\ref{var_x6}, all of the $\\text{Var}$ and $\\text{Cov}$ are with respect to $Y\\sim \\mathcal{N}(0,1)$ (for $\\text{Var}$) and \n 
$(Y_1,Y_2) \\sim \\mathcal{N}\\left( \\left(\\begin{smallmatrix} 0 \\\\ 0 \\end{smallmatrix}\\right) , \\frac{1}{2} \\left(\\begin{smallmatrix} 1 & c_{jk} \\\\ c_{jk} & 1 \\end{smallmatrix}\\right) \\right)$ (for $\\text{Cov}$);\nboth are easy to calculate. The value $c_{jk}$ depends on the pair; for example in Eq~\\ref{var_x2}, the $\\rho_{2D}$ has $c_{jk}=c_{OB}$, the background correlation value in the olfactory bulb, but \nin Eq~\\ref{var_x1}, the $\\text{Cov}$ term is with respect to $\\rho_{2D}$ with $c_{jk}=c_{PC}$, the background correlation value in the piriform cortex. \n\nLastly, we state the formulas for the approximations to the covariances. Although there are 15 total covariance values, we are only concerned with 6 covariance values (3 within OB and 3 within PC); we neglect all covariances \\textit{between} regions, for two reasons.\nFirst, our experimental data set shows that these covariance (and correlation) values are small (see Fig S9 in S2 Text). \nSecond, because there is no background correlation (i.e., common input) between PC and OB in our model, \nany nonzero covariance\/correlation arises strictly via direct coupling. Thus, we cannot view OB--PC covariance from coupling as a small perturbation of the background (uncoupled) state, and we do not expect our model to yield qualitatively accurate predictions for these statistics. 
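Every expectation entering these formulas reduces to a one- or two-dimensional Gaussian quadrature over the standard normal density or the bivariate density described above. A minimal Python sketch of one way to evaluate them, with the transfer function supplied by the caller and simple Riemann sums on a truncated grid (both illustrative choices, not the paper's exact implementation):

```python
import numpy as np

def gauss_mean_var(F, mu, sig, lim=3.0, h=0.01):
    """E[F(sig*Y+mu)] and Var(F(sig*Y+mu)) for Y ~ N(0,1), by quadrature
    over the standard normal density on [-lim, lim]."""
    y = np.arange(-lim, lim + h / 2, h)
    w = np.exp(-0.5 * y ** 2) / np.sqrt(2.0 * np.pi) * h  # rho_SN(y) dy
    f = F(sig * y + mu)
    m = np.sum(f * w)
    return m, np.sum(f ** 2 * w) - m ** 2

def C_jk(F, mu_j, sig_j, mu_k, sig_k, c, lim=3.0, h=0.01):
    """The quantity C(j,k): the double integral of F*F against the bivariate
    density (covariance (1/2)[[1, c], [c, 1]]) minus the product of the two
    single-variable means taken against the standard normal density."""
    y = np.arange(-lim, lim + h / 2, h)
    Y1, Y2 = np.meshgrid(y, y, indexing="ij")
    rho2 = np.exp(-(Y1**2 - 2.0*c*Y1*Y2 + Y2**2) / (1.0 - c**2)) \
           / (np.pi * np.sqrt(1.0 - c**2))
    joint = np.sum(F(sig_j * Y1 + mu_j) * F(sig_k * Y2 + mu_k) * rho2) * h * h
    return joint - gauss_mean_var(F, mu_j, sig_j, lim, h)[0] \
                 * gauss_mean_var(F, mu_k, sig_k, lim, h)[0]
```

As a sanity check, with the identity transfer function and $c=0.5$, C(j,k) recovers the raw covariance of the bivariate density, 0.25.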
The formulas for the Cov of interest are:\n\\begin{eqnarray}\n\t\\tau \\text{Cov}(1,2) &=& \\frac{1}{2}c_{OB}\\sigma_1\\sigma_2 +\\sigma_1 \\frac{g_{21}}{2} \\int \\frac{y}{\\sqrt{2}} F(\\sigma(1)y+\\mu(1))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\n\t\t\t& & +\\sigma_2\\frac{g_{12}}{2}\\int \\frac{y}{\\sqrt{2}} F(\\sigma(2)y+\\mu(2))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\t \t\n\t\t\t& & +\\sigma_2\\frac{g_{13}}{2}\\int \\frac{y}{\\sqrt{2}} F(\\sigma(3)y+\\mu(3))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\t\n\t\t\t& & +\\frac{1}{2}\\sum_{(j,k)} g_{1j} g_{2k} \\mathcal{C}(j,k) \\label{cov_12} \\\\\n\t\\tau \\text{Cov}(1,3) &=& \\frac{1}{2}c_{OB}\\sigma_1\\sigma_3 +\\sigma_1 \\frac{g_{31}}{2} \\int \\frac{y}{\\sqrt{2}} F(\\sigma(1)y+\\mu(1))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\n\t& & +\\sigma_3\\frac{g_{12}}{2}\\int \\frac{y}{\\sqrt{2}} F(\\sigma(2)y+\\mu(2))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\t \t\n\t\t\t& & +\\sigma_3\\frac{g_{13}}{2}\\int \\frac{y}{\\sqrt{2}} F(\\sigma(3)y+\\mu(3))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\t\n\t\t\t& & +\\frac{1}{2}\\sum_{(j,k)} g_{1j} g_{3k} \\mathcal{C}(j,k) \\label{cov_13} \\\\\n\t\\tau \\text{Cov}(2,3) &=& \\frac{1}{2}c_{OB}\\sigma_2\\sigma_3 +\\frac{g_{21}g_{31}}{2} \\text{Var}\\big(F(\\sigma(1)Y+\\mu(1))\\big) \t\\nonumber \\\\\t\n\t\t\t & & + \\frac{\\sigma_3 g_{21}+\\sigma_2 g_{31}}{2}\\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(1)y_2+\\mu(1))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{cov_23} \\\\\n\t\\tau \\text{Cov}(4,5) &=& \\frac{1}{2}c_{PC}\\sigma_4\\sigma_5 +\\sigma_4 \\frac{g_{54}}{2} \\int \\frac{y}{\\sqrt{2}} F(\\sigma(4)y+\\mu(4))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\n\t\t\t\t& & +\\sigma_5\\frac{g_{45}}{2}\\int \\frac{y}{\\sqrt{2}} F(\\sigma(5)y+\\mu(5))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\t \t\n\t\t\t& & +\\sigma_5\\frac{g_{46}}{2}\\int \\frac{y}{\\sqrt{2}} F(\\sigma(6)y+\\mu(6))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\t\n\t\t\t& & +\\frac{1}{2}\\sum_{(j,k)} g_{4j} g_{5k} \\mathcal{C}(j,k) \\label{cov_45} \\\\\n\t\\tau 
\\text{Cov}(4,6) &=& \\frac{1}{2}c_{PC}\\sigma_4\\sigma_6 +\\sigma_4 \\frac{g_{64}}{2} \\int \\frac{y}{\\sqrt{2}} F(\\sigma(4)y+\\mu(4))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\n\t\t\t\t\t& & +\\sigma_6\\frac{g_{45}}{2}\\int \\frac{y}{\\sqrt{2}} F(\\sigma(5)y+\\mu(5))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\t \t\n\t\t\t& & +\\sigma_6\\frac{g_{46}}{2}\\int \\frac{y}{\\sqrt{2}} F(\\sigma(6)y+\\mu(6))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\t\n\t\t\t& & +\\frac{1}{2}\\sum_{(j,k)} g_{4j} g_{6k} \\mathcal{C}(j,k) \\label{cov_46} \\\\\n\t\\tau \\text{Cov}(5,6) &=& \\frac{1}{2}c_{PC}\\sigma_5\\sigma_6 +\\frac{g_{54}g_{64}}{2} \\text{Var}\\big(F(\\sigma(4)Y+\\mu(4))\\big) \t\\nonumber \\\\\t\n\t\t\t & & + \\frac{\\sigma_6 g_{54}+\\sigma_5 g_{64}}{2}\\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(4)y_2+\\mu(4))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{cov_56} \n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n\\mathcal{C}(j,k) &=& \\iint F(\\sigma(j)y_1+\\mu(j))F(\\sigma(k)y_2+\\mu(k))\\rho_{2D}(y_1,y_2)\\,dy_1dy_2 \\nonumber \\\\\n\t\t\t& &- \\left( \\int F(\\sigma(j)y+\\mu(j))\\rho_{SN}(y)\\,dy \\right) \\left( \\int F(\\sigma(k)y+\\mu(k))\\rho_{SN}(y)\\,dy \\right) \\label{mathcal_defn}\n\\end{eqnarray}\n\n\\subsubsection*{Iteration procedure to solve for the approximate statistics self-consistently}\nBased on the approximations and resulting equations described in the previous section, our objective is to solve for the statistics of $x_j$ self-consistently. Once these are determined, the statistics of the firing rates $F(x_j)$ are approximated with the \nsame pairwise normal assumption on $(x_j,x_k)$; we are {\\bf not} assuming that $(F(x_j),F(x_k))$ are bivariate normal random variables. \n\nWe use a simple iterative procedure to solve the system of coupled algebraic expressions for the statistics of $x_j$. \nWe first solve the system in the absence of coupling (i.e. 
Eq~\\ref{eqn:mu_uncoupled}, \\ref{eqn:cov_uncoupled}), and use these values to start the iteration;\nat each step, the formulas for the means (Eq~\\ref{mn_x1}--\\ref{mn_x6}), variances (Eq~\\ref{var_x1}--\\ref{var_x6}), and covariances (Eq~\\ref{cov_12}--\\ref{cov_56}) are \nrecalculated numerically, using the results of the previous step. The iteration stops once \\underline{all 18} statistical quantities of the activity match up to a relative tolerance of $10^{-6}$ (convergence), or after 50 total iterations (non-convergence). The \nresult for a given parameter set can be one of: i) convergence, ii) non-convergence, or iii) converged statistics with an invalid covariance (a non-positive definite covariance matrix), which is checked after i) and ii). We only consider parameter sets \nwhere the iteration has converged and all of the covariances are valid, after which we determine whether the constraints are satisfied.\n \nOne subtle point is that we did not use any of the numerically calculated $\\text{Cov}$ values in the bivariate normal distributions $\\rho_{2D}$; rather, the correlation value is always $c_{jk}$, which is either 0, $c_{OB}$, \nor $c_{PC}$ depending on the pair. In principle, one can use a fully iterative procedure where the formulas for the $\\text{Cov}$ (Eq~\\ref{cov_12}--\\ref{cov_56}) are used in $\\rho_{2D}$; however, we found that the resulting covariance matrices (for \n$\\rho_{2D}$) can fail to be positive semi-definite. \nHandling this case would require additional code and slower calculations for each parameter set, which \ndetracts from \nthe efficiency that is the main purpose of our method. 
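The control flow just described is a plain fixed-point iteration followed by a validity check. A minimal Python sketch, in which `update` is a hypothetical placeholder for re-evaluating the mean, variance, and covariance formulas above from the previous step's values:

```python
import numpy as np

def solve_self_consistent(update, x0, tol=1e-6, max_iter=50):
    """Fixed-point iteration: start from the uncoupled solution x0 and
    re-evaluate all statistics until every entry matches the previous step
    to a relative tolerance (convergence), or max_iter passes
    (non-convergence)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.asarray(update(x), dtype=float)
        rel = np.abs(x_new - x) / np.maximum(np.abs(x), 1e-12)
        x = x_new
        if np.all(rel < tol):
            return x, True        # converged
    return x, False               # non-convergence after max_iter steps

def covariance_valid(C):
    """Post-hoc check: the resulting covariance matrix must be positive
    definite for the parameter set to be considered further."""
    return bool(np.all(np.linalg.eigvalsh(C) > 0.0))
```

Starting from the uncoupled solution, the loop either converges or is flagged as non-convergent; the covariance check is applied afterwards, mirroring the three possible outcomes listed above.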
We compared the results of the two procedures on several parameter sets, \nand the results are quantitatively similar.\n\nThe standard normal $\\rho_{SN}$ and bivariate $\\rho_{2D}$ PDFs have state variable(s) $y_{1,2}$ discretized from -3 to 3 with a mesh size of 0.01; integrals in Eq~\\ref{mn_x1}--\\ref{cov_56} are computed using the trapezoidal rule.\n\n\\subsubsection*{Simplified network with four coupling parameters}\n\nTo further simplify the network, we:\n\\begin{itemize}\n\t\\item set $\\tau=1$,\n\t\\item assume feedforward inhibitory connections within a region have the same strength: $g_{21} = g_{31} =: gIO$ and $g_{54} = g_{64} =: gIP$,\n\t\\item assume cross-region excitatory connections are equal from the presynaptic cell, i.e., $g_{15} = g_{16} =: gEP$ and $g_{42} = g_{43} =: gEO$,\n\t\\item assume $\\sigma_1=\\sigma_2=\\sigma_3=:\\sigma_{OB}$ and $\\sigma_4=\\sigma_5=\\sigma_6=:\\sigma_{PC}$,\n\t\\item assume $g_{12}=g_{13}=g_{45}=g_{46}=:g_\\epsilon=0.1$.\n\\end{itemize}\nThis leaves only four free coupling parameters: $gIO$, $gEO$, $gIP$, $gEP$.\n\n\n\nThe above formulas for the statistics of $x_j$ reduce to:\n\\begin{eqnarray}\n\t\\mu(1) &=& \\mu_1 + gEP \\int \\Big( F(\\sigma(5)y+\\mu(5)) + F(\\sigma(6)y+\\mu(6)) \\Big)\\,\\rho_{SN}(y)\\,dy\t\\nonumber \\\\\n\t\t & & \t + g_\\epsilon \\int \\Big( F(\\sigma(2)y+\\mu(2)) + F(\\sigma(3)y+\\mu(3)) \\Big)\\,\\rho_{SN}(y)\\,dy \\label{mnX1} \\\\\n\t\\mu(2) &=& \\mu_2 + gIO\\int F(\\sigma(1)y+\\mu(1))\\,\\rho_{SN}(y)\\,dy \t\t\\label{mnX2} \\\\\n\t\\mu(3) &=& \\mu_3 + gIO\\int F(\\sigma(1)y+\\mu(1))\\,\\rho_{SN}(y)\\,dy \t\t\\label{mnX3} \\\\ \n\t\\mu(4) &=& \\mu_4 + gEO \\int \\Big( F(\\sigma(2)y+\\mu(2)) + F(\\sigma(3)y+\\mu(3)) \\Big)\\,\\rho_{SN}(y)\\,dy \t \\nonumber \\\\\n\t\t & & + g_\\epsilon \\int \\Big( F(\\sigma(5)y+\\mu(5)) + F(\\sigma(6)y+\\mu(6)) \\Big)\\,\\rho_{SN}(y)\\,dy\t\\label{mnX4} \\\\\t\n\t\\mu(5) &=& \\mu_5 + gIP\\int F(\\sigma(4)y+\\mu(4))\\,\\rho_{SN}(y)\\,dy 
\t\t\\label{mnX5} \\\\\n\t\\mu(6) &=& \\mu_6 + gIP \\int F(\\sigma(4)y+\\mu(4))\\,\\rho_{SN}(y)\\,dy; \t\t\\label{mnX6}\n\\end{eqnarray}\nthe variances are: \n\\begin{eqnarray}\n\t\\sigma^2(1) &=& \\frac{\\sigma^2_{OB}}{2} + \\frac{(gEP)^2}{2} \\text{Var}\\Big(F(\\sigma(5)Y_1+\\mu(5)) + F(\\sigma(6)Y_2+\\mu(6))\\Big) \\nonumber \\\\\t\n\t\t\t& & +\\frac{g^2_\\epsilon}{2} \\text{Var}\\Big(F(\\sigma(2)Y_1+\\mu(2)) + F(\\sigma(3)Y_2+\\mu(3))\\Big) \\label{varX1} \\\\\n\t\\sigma^2(2) &=& \\frac{\\sigma^2_{OB}}{2} + \\frac{(gIO)^2}{2} \\text{Var}\\big(F(\\sigma(1)Y+\\mu(1))\\big) \\nonumber \\\\\t\n\t\t\t\t& & + \\sigma_{OB} gIO \\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(1)y_2+\\mu(1))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{varX2} \\\\\n\t\\sigma^2(3) &=& \\sigma^2(2)\t \\label{varX3} \\\\\n\t\\sigma^2(4) &=& \\frac{\\sigma^2_{PC}}{2} + \\frac{(gEO)^2}{2} \\text{Var}\\Big(F(\\sigma(2)Y_1+\\mu(2)) + F(\\sigma(3)Y_2+\\mu(3))\\Big) \\nonumber \\\\\n\t\t& & +\\frac{g^2_\\epsilon}{2} \\text{Var}\\Big(F(\\sigma(5)Y_1+\\mu(5)) + F(\\sigma(6)Y_2+\\mu(6))\\Big) \\label{varX4}\t\\\\\n\t\\sigma^2(5) &=& \\frac{\\sigma^2_{PC}}{2} + \\frac{(gIP)^2}{2} \\text{Var}\\big(F(\\sigma(4)Y+\\mu(4))\\big) \\nonumber \\\\\t\n\t\t\t\t& & + \\sigma_{PC} gIP \\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(4)y_2+\\mu(4))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{varX5} \\\\\n\t\\sigma^2(6) &=& \\sigma^2(5); \\label{varX6} \t\t\t\t\n\\end{eqnarray}\nthe covariances are:\n\\begin{eqnarray}\n\t\\text{Cov}(1,2) &=& \\frac{1}{2}c_{OB}\\sigma^2_{OB} +\\sigma_{OB} \\frac{gIO}{2} \\int \\frac{y}{\\sqrt{2}} F(\\sigma(1)y+\\mu(1))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\n\t\t\t\t& & +\\sigma_{OB} \\frac{g_\\epsilon}{2} \\int \\frac{y}{\\sqrt{2}} F(\\sigma(2)y+\\mu(2))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\n\t\t\t\t& & + g_\\epsilon gIO \\, \\mathcal{C}(1,2) \\label{covX12} \\\\\n\t\\text{Cov}(1,3) &=& \\text{Cov}(1,2) \\label{covX13} \\\\\n\t\\text{Cov}(2,3) &=& \\frac{1}{2}c_{OB}\\sigma^2_{OB}+\\frac{(gIO)^2}{2} 
\\text{Var}\\big(F(\\sigma(1)Y+\\mu(1))\\big) \t\\nonumber \\\\\t\n\t\t\t & & + \\sigma_{OB} gIO \\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(1)y_2+\\mu(1))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{covX23} \\\\\n\t\\text{Cov}(4,5) &=& \\frac{1}{2}c_{PC}\\sigma^2_{PC} +\\sigma_{PC} \\frac{gIP}{2} \\int \\frac{y}{\\sqrt{2}} F(\\sigma(4)y+\\mu(4))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\n\t\t\t& & +\\sigma_{PC} \\frac{g_\\epsilon}{2} \\int \\frac{y}{\\sqrt{2}} F(\\sigma(5)y+\\mu(5))\\,\\rho_{SN}(y)\\,dy \\nonumber \\\\\n\t\t\t\t& & + g_\\epsilon gIP \\, \\mathcal{C}(4,5)\t\\label{covX45} \\\\\n\t\\text{Cov}(4,6) &=& \\text{Cov}(4,5) \\label{covX46} \\\\\n\t\\text{Cov}(5,6) &=& \\frac{1}{2}c_{PC}\\sigma^2_{PC} +\\frac{(gIP)^2}{2} \\text{Var}\\big(F(\\sigma(4)Y+\\mu(4))\\big) \t\\nonumber \\\\\t\n\t\t\t & & + \\sigma_{PC} gIP \\iint \\frac{y_1}{\\sqrt{2}} F(\\sigma(4)y_2+\\mu(4))\\,\\rho_{2D}(y_1,y_2)\\,dy_1 dy_2 \\label{covX56} \n\\end{eqnarray}\nSee Eq~\\ref{mathcal_defn} for the definition of $\\mathcal{C}$.\n\n\\subsection*{Leaky Integrate-and-Fire Model of the OB--PC Circuit}\n\nWe use a generic spiking neural network model of leaky integrate-and-fire neurons to test the results of the theory. \nThere are $N_{OB}=100$ total OB cells, of which we set 80\\% (80) to be granule (I-)cells and 20\\% (20) to be mitral\/tufted ({\\bf M\/T}) E-cells. There are known to be many \nmore granule cells than M\/T cells in the OB; this ratio of 4-to-1 is similar to other models of OB (see~\\cite{grabska17}, who used 3-to-1). 
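As a concrete illustration of this network scaffolding (the 80/20 split of I- and E-cells, and the independent random connections with probability 0.30 for every connection type described later in this section), a minimal sketch; the variable names are illustrative:

```python
import numpy as np

def build_connectivity(n_pre, n_post, p, rng):
    """Erdos-Renyi connectivity: each possible connection exists
    independently with probability p (we use p_XY = 0.30 for all
    connection types)."""
    return rng.random((n_post, n_pre)) < p

# illustrative setup mirroring the cell counts in the text
rng = np.random.default_rng(0)
n_gran, n_mt = 80, 20   # granule I-cells and M/T E-cells in OB
W_mt_to_gran = build_connectivity(n_mt, n_gran, 0.30, rng)  # E -> I within OB
```

Each of the 12 connection types in the full model gets its own such adjacency matrix, drawn independently.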
\nThe equations for the OB cells are, indexed by $k\\in\\{1,2,\\dots,N_{OB}\\}$:\n\\begin{eqnarray}\\label{ob_lif}\n\t\\tau_m \\frac{d v_k}{dt} & = & \\mu_{OB}-v_k-g_{k, XI}(t)(v_k-\\mathcal{E}_I)-g_{k, XE}(t)(v_k-\\mathcal{E}_E) \\nonumber \\\\\t\n\t & & - g_{k,XPC}(t - \\tau_{\\Delta,PC})(v_k-\\mathcal{E}_E) +\\sigma_{OB}\\left(\\sqrt{1-\\tilde{c}_{OB}}\\eta_k(t) + \\sqrt{\\tilde{c}_{OB}}\\xi_o(t) \\right) \\nonumber \\\\\n\tv_k(t^*) & \\geq & \\theta_k \\Rightarrow v_k(t^*+\\tau_{ref})=0 \\nonumber \\\\\n\tg_{k,XE}(t) &=& \\frac{\\gamma_{XE}}{p_{XE} \\left(0.2 N_{OB} \\right) }\\sum_{k'\\in\\{\\hbox{ presyn OB E-cells}\\} } G_{k'}(t) \\nonumber \\\\\n\tg_{k,XI}(t) &=& \\frac{\\gamma_{XI}}{p_{XI} \\left(0.8 N_{OB} \\right)}\\sum_{k'\\in\\{\\hbox{presyn OB I-cells}\\}} G_{k'}(t) \\nonumber \\\\\n\tg_{k,XPC}(t) &=& \\frac{\\gamma_{X,PC}}{p_{X,PC} \\left(0.8 N_{PC} \\right)} \\sum_{j'\\in\\{\\hbox{presyn PC E-cells}\\}} G_{j'}(t) \\nonumber \\\\\n\t\\tau_{d,X}\\frac{d G_k}{dt} &=& -G_k + A_k \\nonumber \\\\\n\t\\tau_{r,X} \\frac{d A_k}{dt} &=& -A_k + \\tau_{r,X} \\alpha_X \\sum_{l} \\delta(t-t_{k,l}). \\label{eqn:OB_LIF_all}\n\\end{eqnarray}\nThe conductance values in the first equation $g_{k,XI}$, $g_{k,XE}$, and $g_{k,XPC}$ depend on the type of neuron $v_k$ ($X\\in\\{ E, I\\}$). The last conductance, \n$g_{k,XPC}(t - \\tau_{\\Delta,PC})(v_k-\\mathcal{E}_E)$, models the excitatory presynaptic input (feedback) from the PC cells with a time delay of $\\tau_{\\Delta,PC}$. The conductance variables $g_{k,XY}(t)$ are dimensionless because this model was \nderived by scaling the original (raw) conductance variables by the leak conductance, which has the same dimension. \nThe leak, inhibitory, and excitatory reversal potentials are 0, $\\mathcal{E}_I$, and $\\mathcal{E}_E$, respectively, with $\\mathcal{E}_I<0<\\mathcal{E}_E$ \n(the voltage is scaled to be dimensionless, see Table~\\ref{table:lif_parms}). 
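The noise term above mixes a private white noise per cell with one source shared across the population, which makes the pairwise input correlation equal to the mixing parameter. A minimal sketch of generating such inputs under an Euler--Maruyama discretization (the function name and step size are illustrative):

```python
import numpy as np

def correlated_increments(n_cells, n_steps, c, sigma, dt, rng):
    """Noise increments sigma*(sqrt(1-c)*eta_k + sqrt(c)*xi_o)*sqrt(dt):
    eta_k is private to each cell and xi_o is shared by all cells, so the
    pairwise correlation of the increments is exactly c."""
    eta = rng.standard_normal((n_steps, n_cells))   # independent per cell
    xi_o = rng.standard_normal((n_steps, 1))        # common to all cells
    return sigma * (np.sqrt(1.0 - c) * eta + np.sqrt(c) * xi_o) * np.sqrt(dt)
```

With a mixing parameter of 0.5, the sample correlation between any two cells' increments approaches 0.5 as the number of steps grows, while each cell's marginal increment variance stays at sigma squared times dt.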
\n$\\eta_k(t)$ are mutually uncorrelated white noise processes and $\\xi_o(t)$ is the noise term common to all $N_{OB}$ cells.\n\nThe second equation describes the refractory period at spike time $t^*$: when the neuron's voltage crosses \nthreshold $\\theta_k$ (see below for the distribution of thresholds), \nthe neuron enters a refractory period of length $\\tau_{ref}$, after which its voltage is reset to 0. \n\nThe parameter $\\gamma_{XY}$ gives the relative weight of a connection from neuron type $Y$ to neuron type $X$; the parameter $p_{XY}$ is the probability that any such connection exists ($X,Y\\in\\{E,I\\}$). $G_k$ is the synaptic variable associated with each cell, and dependent only on that cell's spike times; its dynamics are given by the final two equations in Eq~\\ref{eqn:OB_LIF_all} and depend on whether cell $k$ is excitatory or inhibitory ($X\\in\\{E,I\\}$).\n\nFinally, two of the parameters above can be equated with coupling parameters in the reduced model:\n\\begin{equation}\ngEP = \\gamma_{E,PC}; \\quad gIO = \\gamma_{EI}\n\\end{equation}\nwhich are dimensionless scale factors for the synaptic conductances.\n\n\n\\begin{table}[!ht]\n\\centering\n\\caption{{\\bf Fixed parameters for the LIF OB--PC model, see Eqs~\\ref{ob_lif}--\\ref{pc_lif}. }}\n\\label{table:lif_parms}\n\\begin{tabular}{|lcccccccccc|}\n\\hline\n\\multicolumn{11}{|c|}{\\textbf{Same for both OB and PC}} \\\\ \\hline\n\\textbf{Parameter} & $\\tau_m$ & $\\tau_{ref}$ & $\\mathcal{E}_I$ & $\\mathcal{E}_E$ & $\\tau_{d,I}$ & $\\tau_{r,I}$ & $\\tau_{d,E}$ & $\\tau_{r,E}$ & $\\alpha_I$ & $\\alpha_E$ \\\\ \\hline\n & 20\\,ms & 2\\,ms & -2.5 & 6.5 & 10\\,ms & 2\\,ms & 5\\,ms & 1\\,ms \t\t\t\t\t\t\t\t& 2\\,Hz & 1\\,Hz \\\\ \\thickhline\n\\textbf{Parameter} & $N$ \t & Spont. 
$\\mu$ & Evoked $\\mu$ & $\\sigma$ & $\\tilde{c}$ & $\\gamma_{EE}$ & $\\gamma_{IE}$ & $\\gamma_{II}$ & $\\tau_{\\Delta,PC\/OB}$ & \\\\ \\hline\n\\textbf{OB} & 100 \t\t & 0.6 & 0.9$^*$ \t\t\t\t& 0.05 & 0.5 & 2 & 4 & 2 \t\t & \t10\\,ms & $ $ \\\\\n\\textbf{PC} & 100 \t\t & 0 & 0.4 \t\t\t\t\t& 0.1 & 0.8 & 5 & 8 & 6 \t\t & \t5\\,ms & $ $ \\\\ \\hline\n\\end{tabular}\n\\begin{flushleft} All 12 probabilities of connections are set to $p_{XY}=0.30$; otherwise connections were chosen randomly and independently (Erd\\H{o}s-R\\'enyi graphs). \nThe synaptic time delay from OB to PC is $\\tau_{\\Delta,OB}=10\\,$ms, and from PC to OB is $\\tau_{\\Delta,PC}=5\\,$ms. \nVoltages are scaled from mV via (V - Vreset)\/(Vth - Vreset), corresponding for \nexample to Vreset=Vleak=-65\\,mV, Vth=-55\\,mV (on average), an excitatory reversal potential of 0\\,mV, and an inhibitory reversal potential of -90\\,mV. {\\bf *}Note: in the evoked state, \nonly the {\\bf M\/T} (E-cells) in OB receive a larger $\\mu$ input (increased from 0.6 to 0.9); the granule cells in OB have $\\mu=0.6$ even in the evoked state.\n\\end{flushleft}\n\\end{table}\n\nThe PC cells have the same functional form but with different parameters (see Table~\\ref{table:lif_parms} for parameter values). We model $N_{PC}=100$ total PC cells, of which 80\\% are excitatory and 20\\% inhibitory. 
\nThe equations, indexed by $j\\in\\{1,2,\\dots,N_{PC}\\}$, are:\n\\begin{eqnarray}\\label{pc_lif}\n\t\\tau_m \\frac{d v_j}{dt} & = & \\mu_{PC}-v_j-g_{j,XI}(t)(v_j-\\mathcal{E}_I)-g_{j,XE}(t)(v_j-\\mathcal{E}_E) \\nonumber \\\\\t\n\t & & - g_{j,XOB}(t - \\tau_{\\Delta,OB})(v_j-\\mathcal{E}_E) +\\sigma_{PC}\\left(\\sqrt{1-\\tilde{c}_{PC}}\\eta_j(t) + \\sqrt{\\tilde{c}_{PC}}\\xi_p(t) \\right) \\nonumber \\\\\n\tv_j(t^*) & \\geq & \\theta_j \\Rightarrow v_j(t^*+\\tau_{ref})=0 \\nonumber \\\\\n\tg_{j,XE}(t) &=& \\frac{\\gamma_{XE}}{p_{XE} \\left(0.8 N_{PC} \\right)}\\sum_{j'\\in\\{\\hbox{presyn PC E-cells}\\}} G_{j'}(t) \\nonumber \\\\\n\tg_{j,XI}(t) &=& \\frac{\\gamma_{XI}}{p_{XI} \\left(0.2 N_{PC} \\right)}\\sum_{j'\\in\\{\\hbox{presyn PC I-cells}\\}} G_{j'}(t) \\nonumber \\\\\n\t\tg_{j,XOB}(t) &=& \\frac{\\gamma_{X,OB}}{p_{X,OB} \\left(0.2 N_{OB} \\right)} \\sum_{k'\\in\\{\\hbox{presyn OB E-cells}\\}} G_{k'}(t) \\nonumber \\\\\n\t\\tau_{d,X}\\frac{d G_j}{dt} &=& -G_j + A_j \\nonumber \\\\\n\t\\tau_{r,X} \\frac{d A_j}{dt} &=& -A_j + \\tau_{r,X} \\alpha_X \\sum_{l} \\delta(t-t_{j,l}).\n\\end{eqnarray}\nExcitatory synaptic input from the OB cells along the lateral olfactory tract is modeled by $g_{j,XOB}(t - \\tau_{\\Delta,OB})(v_j-\\mathcal{E}_E)$. The common noise term for the \nPC cells, $\\xi_p(t)$, is independent of the common noise term for the OB cells, $\\xi_o(t)$. \nTwo of the parameters above can be equated with coupling parameters in the reduced model:\n\\begin{equation}\ngEO = \\gamma_{E,OB}; \\quad gIP = \\gamma_{EI}\n\\end{equation}\n\n\nThe values of the parameters \nthat are not stated in Table~\\ref{table:lif_parms} are varied throughout the paper: \n$$ gIO, \\hspace{.5in} gEO, \\hspace{.5in} gIP, \\hspace{.5in} gEP. $$\n\nTo model two activity states, we allowed mean inputs to vary (see Table~\\ref{table:lif_parms}). 
In contrast to the reduced model, we increased both the inputs to PC cells (from $\\mu_{PC}=0$ in the spontaneous state to \n$\\mu_{PC}=0.4$ in the evoked state) as well as the inputs to OB cells: $\\mu_{OB}$ increases from 0.6 in the spontaneous state to 0.9 in the evoked state, but only for {\\bf M\/T} cells (OB granule cell \ninput is the same in the spontaneous and evoked states). \n\n\nFinally, we model heterogeneity by setting the threshold values $\\theta_j$ in the following way. Both OB and PC cells have the following distribution for $\\theta_j$:\n\\begin{eqnarray}\\label{thres_distr}\n\t\\theta_j &\\sim& e^{\\mathcal{N}} \n\\end{eqnarray}\nwhere $\\mathcal{N}$ is a normal random variable with mean $-\\sigma^2_\\theta\/2$ and standard deviation $\\sigma_\\theta$, so that $\\{\\theta_j\\}$ has a \nlog-normal distribution with mean 1 and variance $e^{\\sigma_\\theta^2}-1$. We set $\\sigma_\\theta=0.1$, which results in the firing rate ranges seen in the experimental data. \nSince the numbers of cells are modest with regard to sampling ($N_{OB}=100$, $N_{PC}=100$), we evenly sampled the log-normal distribution from the 5$^{th}$ to 95$^{th}$ percentiles (inclusive). \n\nWe remark that the synaptic delays $\\tau_{\\Delta,PC}$ and $\\tau_{\\Delta,OB}$ were set to modest values to capture the appreciable distances between OB and PC. This is a reasonable choice \nbased on evidence that stimulation in PC elicits a response in OB 5-10\\,ms later~\\cite{neville03}.\n\nIn all Monte Carlo simulations of the coupled LIF network, we used a time step of 0.1\\,ms, with 2\\,s of biology time for each of the 50,000 realizations (i.e., over 27.7 hours of biology time), enough for the simulated statistics to effectively converge.\n\n\\section*{Supporting Information}\n\n\\paragraph*{S1 Text.}\n\\label{S1_file}\n{\\bf Experimental Data Statistics by Odor.} This file shows the trial-averaged spiking statistics of the experimental data broken down by specific odor.\nContains Figs. 
S1-S8, and Tables S1-S2.\n\n\\paragraph*{S2 Text.}\n\\label{S2_file}\n{\\bf Supplementary Figures for the Main Modeling.} This file contains supplemental figures from modeling and analysis. Contains Figs. S9-S16.\n\n\\paragraph*{S3 Text.}\n\\label{S3_file}\n{\\bf Supplementary Material: Cortical-Cortical Network.} This file contains supplemental modeling results on a generic cortical-cortical coupled network. Contains Figs. S17-S21, and Table S3.\n\n\\paragraph*{S1 Table.} \n{\\bf Average population firing rate by odor and activity state.}\n\\paragraph*{S2 Table.} \n{\\bf Standard deviation of population firing rate by odor and activity state.}\n\\paragraph*{S3 Table.} \n{\\bf Fixed parameters for the LIF Cortical-cortical model.}\n\n\n\\paragraph*{S1 Figure.} \n{\\bf Experimental statistics by odor and activity state: Fano Factor.}\n\\paragraph*{S2 Figure.}\n{\\bf Experimental statistics by odor and activity state: spike count variance.}\n\\paragraph*{S3 Figure.}\n{\\bf Experimental statistics by odor and region: Fano Factor.}\n\\paragraph*{S4 Figure.}\n{\\bf Experimental statistics by odor and region: spike count variance.}\n\\paragraph*{S5 Figure.}\n{\\bf Experimental statistics by odor and activity state: spike count correlation.}\n\\paragraph*{S6 Figure.}\n{\\bf Experimental statistics by odor and activity state: spike count covariance.}\n\\paragraph*{S7 Figure.}\n{\\bf Experimental statistics by odor and region: spike count correlation.}\n\\paragraph*{S8 Figure.}\n{\\bf Experimental statistics by odor and region: spike count covariance.}\n\\paragraph*{S9 Figure.}\n{\\bf Cross-region correlations are smaller than within-region correlations.}\n\\paragraph*{S10 Figure.}\n{\\bf Fast analytic approximation accurately captures statistics of a multi-population firing rate model.}\n\\paragraph*{S11 Figure.}\n{\\bf Experimental observations constrain conductance parameters in analytic model.}\n\\paragraph*{S12 Figure.}\n{\\bf Analytic approximation results are robust 
to choice of transfer function.}\n\\paragraph*{S13 Figure.}\n{\\bf Mean input to PC must increase in the evoked state.}\n\\paragraph*{S14 Figure.}\n{\\bf Violating derived relationship $gIO < gIP$ results in statistics that are inconsistent with experimental observations.}\n\\paragraph*{S15 Figure.}\n{\\bf Violating derived relationship $gEP > gEO$ results in statistics that are inconsistent with experimental observations.}\n\\paragraph*{S16 Figure.}\n{\\bf Violating derived relationship $gEP, gIP \\gg gEO, gIO$ results in statistics that are inconsistent with experimental observations.}\n\\paragraph*{S17 Figure.}\n{\\bf Minimal firing rate model to analyze synaptic conductance strengths.}\n\\paragraph*{S18 Figure.}\n{\\bf Detailed spiking LIF model confirms the results from the analytic rate model.}\n\\paragraph*{S19 Figure.}\n{\\bf Violating derived relationship $\\vert gI1\\vert < \\vert gI2\\vert$ results in statistics that are inconsistent with experimental observations.}\n\\paragraph*{S20 Figure.}\n{\\bf Violating derived relationship $gE2, gI2 \\gg gE1, gI1$ results in statistics that are inconsistent with experimental observations.}\n\\paragraph*{S21 Figure.}\n{\\bf Violating derived relationship $gE2 > gE1$ results in statistics that are inconsistent with experimental observations.}\n\n\n\n\\section*{Author Contributions}\nConceived and designed research: AKB CL. Derived expressions in theoretical methods: CL. Analyzed the data: AKB SHG WLS CL. Conceived and designed electrophysiological experiments: SHG WLS. \nWrote the paper: AKB SHG WLS CL. \n\n\\nolinenumbers\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nMotion planning and task planning have gained enormous momentum in the robotics community in the past decade or so. 
Although motion (task) planning has attracted a great deal of research over the past few decades, researchers have recently developed new metrics and methodologies to represent motion and task specifications. Initially, motion planning for a mobile robot started with the aim of moving a point mass from an initial position to a final position in some optimal fashion. Over time, researchers began to consider planning in cluttered domains (i.e., in the presence of obstacles) and also accounted for the dimensionality and the physical constraints of the robot. \n\nThough we have efficient approaches for general motion planning, very few are available or scalable for planning in dynamic environments or under finite time constraints. Temporal logics have been widely used \nto address complex motion specifications, motion sequencing, timing behaviors, etc. Historically, temporal logic originated for model checking, validation, and verification in the software community \\cite{baier2008}, \nand researchers later found Linear Temporal Logic (LTL), Computation Tree Logic (CTL), Signal Temporal Logic (STL), etc. very helpful for representing complex motion (or task) specifications. \nThe development of tools such as SPIN \\cite{SPIN} and NuSMV \\cite{NuSMV} made it easier to check whether a given specification can be met by creating a suitable automaton and looking \nfor a feasible path on that automaton. However, the construction of the automaton from a given temporal logic formula is based on the implicit assumption that there are no time constraints associated with the specification.\n\nRobot motion planning has now reached a stage where it is crucial to incorporate time constraints, since these constraints can arise from different aspects of the problem: dynamic environments, sequential \nprocessing, time optimality, etc. 
Planning with time-bounded objectives is inherently hard because every transition from one state to another in the associated automaton has to be carried out, \nby some controller, exactly in time from an initial configuration to the final configuration. Time-bounded motion planning has been done in heuristic ways \\cite{kant, Erdmann} and also by using a \nmixed integer linear programming (MILP) framework \\cite{richards, zhou}. In this paper, we are interested in extending the idea of using LTL for time-unconstrained planning to using MITL for time-constrained \nmotion planning. In \\cite{maity}, the authors proposed a method to represent a time-constrained planning task as an LTL formula rather than an MITL formula. This formulation reduced the complexity from \n\\textsc{Exp-space}-complete for MITL to \\textsc{Pspace}-complete for LTL. However, the number of states in the generated B\\\"{u}chi automata increases with the number of time steps. \n\nIn this paper, we mainly focus on motion planning based on the construction of an efficient timed automaton from a given MITL specification. \nA dedicated controller to navigate the robot can be constructed for the general planning problem\nonce the discrete path is obtained from the automaton. Early results on the construction of algorithms to verify timing properties of real-time systems can be found in \\cite{Alur1996}. The complexity of satisfiability and model checking problems for MTL formulas has already been studied in \\cite{SurveyOuaknine08}, where it has been shown that commonly \nused real-time properties such as bounded response and invariance can be decided in polynomial time or exponential space. More work on the decidability of MTL can be found in \\cite{MTLOuaknine05} \nand the references therein. The concept of alternating timed automata for bounded-time model checking can be \nfound in \\cite{Jenkins2010}. 
\\cite{Nickovic2010} discusses constructing deterministic timed automata from MTL specifications, which provides a unified framework that includes all the future operators\nof MTL. The key to the approach of \\cite{Nickovic2010} was separating continuous-time monitoring from discrete-time predictions of the future. We restrict our attention to generating timed automata \nfrom MITL based on the work done in \\cite{Maler2006a}. This is done by constructing one timed automaton to generate a sequence of states and another to check whether the generated sequence is valid, in the sense that it satisfies the given MITL specification.\n\nThe rest of the paper is organized as follows. Section \\ref{sec:pre} provides background on MITL and the timed automata based approach for MITL. Section \\ref{sec:motion_planning} illustrates how the timed automata can be used for motion synthesis, and we also provide an UPPAAL \\cite{uppaal} implementation of the same. Section \\ref{sec:case} gives examples of different time-bounded tasks and shows the implementation results. Section \\ref{sec:continuous} provides a brief overview of how a continuous trajectory can be generated from the discrete plan. Finally, we conclude in Section \\ref{sec:conclusion}.\n\n\\section{Preliminaries} \\label{sec:pre}\n\nIn this paper, we consider a surveying task in an area by a robot whose motion is abstracted to a graph. In our particular setup, the robot motion is captured as a timed automaton (Fig. \\ref{fig:map}). Every edge is a timed transition that represents navigation of the robot from one location to another in space, and every vertex of the graph represents a cell of the partitioned space. 
Our objective is to find a time-optimal path that satisfies a specification given in timed temporal logic.\n\n\\begin{figure}%\n\\centering\n\\begin{tikzpicture}[->,>=stealth']\n\n\n\n\n \\node[normal] (S0) \n {\\begin{tabular}{l}\n pos0\\\\\n\t$z\\leq 1$\\\\\n \\end{tabular}};\n \n \\node[normal, \n right of=S0, \n node distance=4cm, \n anchor=center] (S1) \n {\\begin{tabular}{l}\n pos1:$B$\\\\\n\t$z\\leq 1$\\\\\n \\end{tabular}\n };\n \n \\node[normal,\n below of=S0,\n yshift=-1cm,\n anchor=center] (S3) \n {\\begin{tabular}{l}\n pos3:$A$\\\\\n\t$z\\leq 1$\\\\\n \\end{tabular}\n };\n\n\n \\node[normal,\n right of=S3,\n node distance=4cm,\n anchor=center] (S2) \n {\\begin{tabular}{l}\n pos2\\\\\n\t$z\\leq 1$\\\\\n \\end{tabular}\n };\n\n\n \\path \n\t(S0) \tedge node[above]{$z\\geq 1 | z:=0$} (S1)\n\t(S1)\tedge (S0)\n (S1) edge node[left,align=center]{$z \\geq 1$\\\\ $z:=0$} (S2)\n\t(S2) \tedge (S1)\n\t(S2)\tedge\tnode[below]{$z\\geq 1 | z:=0$} (S3)\n\t(S3) edge (S2)\n\t(S0) \tedge node[left=0cm,align=center]{$z>0$\\\\$z:=0$} (S3)\n\t(S3)\tedge\t (S0);\n \n\\end{tikzpicture}\n\\caption{Timed automaton based on the cell decomposition and robot dynamics}\n\\label{fig:map}\n\\end{figure}\n\n \\subsection{Metric Interval Temporal Logic (MITL)}\n\nMetric interval temporal logic is a specification formalism for model checking that augments temporal specifications with timing constraints. It differs from Linear Temporal Logic in that its temporal operators carry timing constraints.\n\nLTL formulas are built from atomic propositions according to the following grammar.\n\\begin{df} \\label{defLTL}\n \\textit{The syntax of LTL formulas is defined according to the following grammar rules:}\n \\begin{center}\n $\\phi ::= \\top ~| ~\\pi~ |~\\neg \\phi~ | ~\\phi \\vee \\phi ~| ~\\mathbf{X} \\phi | ~\\phi \\mathbf{U} \\phi ~ $\n \\end{center}\n \\end{df}\nHere $\\pi \\in \\Pi$, the set of atomic propositions; $\\top$ and $\\bot(=\\neg\\top)$ are the Boolean constants $true$ and $false$ respectively. 
$\\vee$ denotes the disjunction operator and $\\neg$ denotes the negation operator. $\\mathbf{U}$ represents the until operator. MITL extends the until operator to incorporate timing constraints.\n\n\\begin{df} \\label{def1}\n \\textit{The syntax of MITL formulas is defined according to the following grammar rules:}\n \\begin{center}\n $\\phi ::= \\top ~| ~\\pi~ |~\\neg \\phi~ | ~\\phi \\vee \\phi ~|~\\phi \\mathbf{U}_I \\phi ~ $\n \\end{center} \n \\end{df}\n where $I\\subseteq [0, \\infty]$ is an interval with end points in $\\mathbb{N} \\cup \\{\\infty\\}$. $\\mathbf{U}_I$ symbolizes the timed until operator. Sometimes we will represent $\\mathbf{U}_{[0, \\infty]}$ by $\\mathbf{U}$. \nOther Boolean and temporal operators, such as conjunction ($\\wedge$), eventually within $I$ ($\\Diamond_I$), always on $I$ ($\\Box_I$), etc., can be represented using the grammar described in definition \\ref{def1}. For example, we can express the time-constrained eventually operator as $\\Diamond_I\\phi \\equiv \\top \\mathbf{U}_I\\phi$, and so on. 
In this paper, all untimed temporal operators are transformed into the until operator and all timed operators are transformed into eventually within $I$ ($\\Diamond_I$), to make it easier to generate a timed automaton.\n\nMITL is interpreted over $n$-dimensional Boolean $\\omega$-sequences of the form $\\xi: \\mathbb{N} \\rightarrow \\mathbb{B}^n$, where $n$ is the number of propositions.\n\\begin{df}\\label{ltlsym}\n \\textit{The semantics of any MITL formula $\\phi$ is recursively defined over a trajectory $(\\xi, t)$ as:\\\\\n $(\\xi, t) \\models \\pi$ iff $(\\xi, t)$ satisfies $\\pi$ at time $t$\\\\\n $(\\xi, t) \\models \\neg \\pi$ iff $(\\xi, t)$ does not satisfy $\\pi$ at time $t$\\\\\n $(\\xi, t) \\models \\phi_1\\vee \\phi_2$ iff $(\\xi, t) \\models \\phi_1$ or $(\\xi, t) \\models \\phi_2$\\\\\n $(\\xi, t) \\models \\phi_1\\wedge \\phi_2$ iff $(\\xi, t) \\models \\phi_1$ and $(\\xi, t) \\models \\phi_2$\\\\\n $(\\xi, t) \\models \\bigcirc \\phi$ iff $(\\xi, t+1) \\models \\phi$ \\\\\n $(\\xi, t) \\models \\phi_1\\mathbf{U}_I \\phi_2$ iff $\\exists s \\in I$ s.t. $(\\xi, t+s) \\models \\phi_2$ and $\\forall$ $s' \\leq s, ~ (\\xi, t+s') \\models \\phi_1$.}\n\n\\end{df}\nThus, the expression $\\phi_1 \\mathbf{U}_I \\phi_2$ means that $\\phi_2$ will become true within the time interval $I$ and that, until $\\phi_2$ becomes true, $\\phi_1$ must hold. \nThe MITL operator $\\bigcirc \\phi$ means that the specification $\\phi$ is true at the next time instance, $\\Box_I \\phi$ means that $\\phi$ is always true for the time duration $I$, and \n$\\Diamond_I \\phi$ means that $\\phi$ will eventually become true within the time interval $I$. Composition of two or more MITL operators can express very sophisticated \nspecifications; for example, $\\Diamond_{I_1} \\Box_{I_2} \\phi$ means that within time interval $I_1$, $\\phi$ will become true and from that instant it will remain true for a duration\nof $I_2$. 
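For intuition, the discrete-time semantics of definition \\ref{ltlsym} can be prototyped directly. The following is a toy evaluator over finite Boolean traces (an illustrative sketch with our own encoding, not part of the actual tool chain); it also shows the derived operator $\\Diamond_{[a,b]}\\phi \\equiv \\top \\mathbf{U}_{[a,b]}\\phi$:

```python
# A toy discrete-time evaluator for the MITL semantics of Definition 3
# (illustrative only; formula encoding and names are our own).
def holds(phi, xi, t):
    """Evaluate formula `phi` over the Boolean trace `xi` (prop -> list) at time t."""
    op = phi[0]
    if op == "true":
        return True
    if op == "prop":
        return xi[phi[1]][t]
    if op == "not":
        return not holds(phi[1], xi, t)
    if op == "or":
        return holds(phi[1], xi, t) or holds(phi[2], xi, t)
    if op == "U":  # phi1 U_[a,b] phi2, with "for all s' <= s" as in the definition
        _, p1, p2, a, b = phi
        n = len(next(iter(xi.values())))
        for s in range(a, b + 1):
            if t + s >= n:
                break
            if holds(p2, xi, t + s) and all(holds(p1, xi, t + sp) for sp in range(s)):
                return True
        return False
    raise ValueError(op)

def ev(p, a, b):
    """Derived timed eventually: Diamond_[a,b] p == True U_[a,b] p."""
    return ("U", ("true",), p, a, b)

# Example trace: B holds at t=1, A holds at t=3.
xi = {"A": [False, False, False, True, False],
      "B": [False, True, False, False, False]}
print(holds(ev(("prop", "A"), 0, 3), xi, 0))                             # True
print(holds(("U", ("not", ("prop", "A")), ("prop", "B"), 0, 4), xi, 0))  # True
```

The second query is the "visit $B$ before $A$" pattern, $\\neg A \\mathbf{U}_{[0,4]} B$, used again in the case study.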
Other Boolean operators such as implication ($\\Rightarrow$) and equivalence ($\\Leftrightarrow$) can be expressed using the grammar rules and semantics given in definitions \n\\ref{def1} and \\ref{ltlsym}. More details on MITL grammar and semantics can be found in \\cite{MTL}, \\cite{Alur1996}. \n\n\\subsection{MITL and Timed Automata Based Approach}\nAn LTL formula can be transformed into a B\\\"{u}chi automaton, which can be used in optimal path synthesis \\cite{Smith2010} and automata based guidance \\cite{wolff_automatonguided_2013}. Similarly, in this paper, we focus on developing a timed automata based approach for MITL based motion planning. MITL, a restriction of Metric Temporal Logic (MTL), disallows punctuality in the temporal intervals, so that the left boundary and the right boundary of an interval have to be different. \nIn general, the complexity of model checking for MTL-related logics is higher than that of LTL. The theoretical model checking complexity for LTL is \\textsc{Pspace}-complete \\cite{Sistla1985}, and the algorithms that have been implemented are exponential in the size of the formula. MTL itself is undecidable. \nThe model checking process for MITL involves transforming it into a timed automaton \\cite{Alur1996}\\cite{Maler2006a}. CoFlatMTL and BoundedMTL, defined in \\cite{CoFlatMTLBouyer08}, are fragments of MTL more expressive than MITL; they can be translated to LTL-Past, but with an exponential increase in size. SafetyMTL \\cite{MTLOuaknine05} and MTL, evaluated over finite and discrete timed words, can be translated into alternating timed automata. Although these theoretical results suggest that many fragments of MTL are usable, many of the model checking algorithms are based on language emptiness checks, which are very different from control synthesis, i.e., finding a feasible path. 
To the best of our knowledge, the algorithm closest to an implementation for motion planning is that of \\cite{Maler2006a}.\n\nThis paper uses the MITL to timed automaton generation based on \\cite{Maler2006a}. In the following section, a summary of the transformation and our implementation for control synthesis are discussed.\n\n\\section{MITL for Motion Planning}\\label{sec:motion_planning}\n\n\\subsection{MITL to Timed Automata Transformation}\nConsider the following requirements: a robot has to eventually visit an area $A$ and another area $B$ in the time interval $[l,r]$, and the area $A$ has to be visited first. This can be captured in the following MITL formula,\n\\[\n\\phi = (\\neg B \\mathbf{U} A)\\wedge (\\Diamond_{[l,r]} B)\n\\] \n\nIt can be represented by a logic tree structure, where every node that has children is a temporal logic operator and every leaf node is an atomic proposition, as shown in Fig. \\ref{fig:tree}. Every link represents an input output relationship.\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}[\nlevel 1\/.style={sibling distance=30mm},level 2\/.style={sibling distance=20mm}, level distance=30pt,\nedge from parent path={\n(\\tikzparentnode) |- \n($(\\tikzparentnode)!0.5!(\\tikzchildnode)$) -|\n(\\tikzchildnode)}] \n\n \\node[normal, text width=1cm] {$\\wedge$}\n child {node[normal,text width=1cm] {$\\mathbf{U}$}\n\t child {node[normal,text width=1cm] {$\\neg$}\n\t\t\tchild {node[normal,text width=1cm]{$p(B)$}}\n\t\t}\n child {node[normal,text width=1cm] {$p(A)$}}\n\t\t}\n child {node[normal,text width=1cm] {$\\Diamond_{[l,r]}$}\n child {node[normal,text width=1cm] {$p(B)$}}\n };\n\\end{tikzpicture}\n\\caption{Logic tree representation of $\\phi$.}\n\\label{fig:tree}\n\\end{figure}\n\nThe authors in \\cite{Maler2006a} propose to transform every temporal logic operator into a timed signal transducer, which is a timed automaton that accepts inputs and generates outputs. 
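The tree of Fig. \\ref{fig:tree} can be sketched as nested tuples, with a helper listing the operator nodes bottom-up, i.e., the order in which the corresponding transducers are composed (a hypothetical illustration; the encoding and names are ours):

```python
# A hypothetical encoding of the logic tree of Fig. 2: each internal node
# is an operator (to become a timed signal transducer), each leaf ("p", .)
# an atomic proposition; "ev" stands for the timed eventually Diamond_[l,r].
phi = ("and",
       ("U", ("not", ("p", "B")), ("p", "A")),
       ("ev", ("l", "r"), ("p", "B")))

def operators(node):
    """Collect operator nodes bottom-up: children first, then the node itself."""
    if node[0] == "p":
        return []
    kids = node[2:] if node[0] == "ev" else node[1:]
    out = []
    for k in kids:
        out += operators(k)
    return out + [node[0]]

print(operators(phi))  # ['not', 'U', 'ev', 'and']
```

The bottom-up order matters because each transducer's input alphabet is the output alphabet of its children, as formalized in the I/O composition below.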
Based on their definition, the Input Output Timed Automaton (IOTA) used in this paper is defined as follows to fit the control synthesis problem.\n\n\\begin{df}[Input Output Timed Automaton]\\label{df:iota}\n\\textit{An input output timed automaton is a tuple $\\mathcal{A} =(\\Sigma, Q, \\Gamma, \\mathcal{C}, \\lambda, \\gamma, I, \\Delta, q_0, F)$, where \\\\\n$\\Sigma$ is the input alphabet, $Q$ is the finite set of discrete states, \\\\\n$\\Gamma$ is the output alphabet, $\\mathcal{C}$ is the set of clock variables, and \\\\\n$I$ is the invariant condition, defined by a conjunction of inequalities over the clock variables. A clock variable can be deactivated or activated by setting its rate to $0$ or $1$ in the invariant $I$. \\\\\n$\\lambda : Q \\rightarrow \\Sigma$ is the input function, which labels every state with an input, while \\\\\n$\\gamma : Q \\rightarrow \\Gamma$ is the output function, which labels every state with an output. \\\\\n$\\Delta$ is the transition relation between states, defined by tuples $(p,q,r,g)$, where $p$ is the start state, $q$ is the end state, $r$ is the set of clock resets, and $g$ is the guard condition on the clock variables.\\\\\n $q_0$ is the initial state of the timed automaton. \\\\\n$F$ is the set of B\\\"{u}chi states that have to be visited infinitely often.}\n\\end{df}\n\nThe transformations of the until operator and the timed eventually operator are summarized in Figs. \\ref{fig:pUq}, \\ref{fig:EI_GEN} and \\ref{fig:EI_CHK}. They are based on \\cite{Maler2006a}, with minor changes to match our definition of IOTA. In Fig. \\ref{fig:pUq}, the timed automaton for $p \\mathcal{U} q$ is shown. The inputs and outputs of the states are specified in the second line within the box of each state. $p\\bar{q}$ means the inputs are $[1,0]$, $\\bar{p}$ means the inputs can be $[0,1]$ or $[0,0]$, and $\\gamma=1$ means the output is 1. Transitions are specified in the format $g|r$. 
In this case, all the transitions have guard $z>0$ and reset clock $z$. All states in this automaton are B\\\"{u}chi accepting states except $s_{p\\bar{q}}$. The B\\\"{u}chi accepting states are highlighted.\n\n\\begin{figure}%\n\\centering\n\\begin{tikzpicture}[->,>=stealth']\n\n\n\n\n \\node[state,\n\ttext width=2cm] (S0) \n {\\begin{tabular}{l}\n $s_{\\bar{p}}$\\\\\n\t\\quad $\\bar{p}\/\\gamma=0$\\\\\n \\end{tabular}};\n \n \\node[state, \n right of=S0, \n node distance=5cm, \n anchor=center,\n\ttext width=2cm] (S1) \n {%\n \\begin{tabular}{l} \n $\\bar{s}_{p\\bar{q}}$\\\\\n\t\\quad $p\\bar{q}\/\\gamma=0$\\\\\n \\end{tabular}\n };\n \n \\node[normal,\n below of=S0,\n yshift=-2cm,\n anchor=center,\n text width=2cm] (S2) \n {%\n \\begin{tabular}{l}\n $s_{p\\bar{q}}$\\\\\n \\quad $p\\bar{q}\/\\gamma=1$\\\\\n \\end{tabular}\n };\n\n\n \\node[state,\n right of=S2,\n node distance=5cm,\n anchor=center] (S3) \n {%\n \\begin{tabular}{l}\n $s_{pq}$\\\\\n \\quad $pq\/\\gamma=1$\\\\\n \\end{tabular}\n };\n\n\n \\path \n\t(S0.5) \tedge node[above]{$z>0 | z:=0$} (S1.175)\n\t(S1.185)\tedge\tnode[below]{$z>0 | z:=0$} (S0.355)\n (S0.270) \tedge node[left,align=center]{$z>0$\\\\ $z:=0$} (S2.90)\n\t(S2.5) \tedge node[above]{$z>0 | z:=0$} (S3.175)\n\t(S3.185)\tedge\tnode[below]{$z>0 | z:=0$} (S2.355)\n\t(S3.90) \tedge node[right,align=center]{$z>0$\\\\$z:=0$} (S1.270)\n\t(S0.320) \tedge node[left=0cm,align=center]{$z>0$\\\\\\quad\\quad$z:=0$} (S3.160)\n\t(S3.150)\tedge\tnode[right=-0.2cm,align=center]{$z>0$\\\\\\quad\\quad$z:=0$} (S0.332);\n\n\\end{tikzpicture}\n\n\\caption{The timed automaton for $p \\mathcal{U} q$. The inputs and outputs of the states are specified in the second line of each state. $p\\bar{q}$ means the inputs are $[1,0]$ and $\\bar{p}$ means the inputs can be $[0,1]$ or $[0,0]$, and $\\gamma=1$ means the output is 1. Transitions are specified in the format of guard$|$reset. In this case all the transitions have guard $z>0$ and reset clock $z$. 
All states in this automaton are B\\\"{u}chi accepting states except $s_{p\\bar{q}}$. The B\\\"{u}chi accepting states are highlighted.}%\n\\label{fig:pUq}%\n\\end{figure}\n\n\\begin{figure}%\n\\centering\n\\begin{tikzpicture}[->,>=stealth']\n\n\n\n\n \\node[normal,\n\ttext width=2cm] (S0) \n {\\begin{tabular}{l}\n $\\text{Gen}_1$\\\\\n\t\\quad $x_1'==1$\\\\\n\t\\quad $*\/\\gamma=0$\\\\\n \\end{tabular}};\n\n \\node[normal,\n\ttext width=6cm,\n\tright of=S0,\n\tnode distance=2cm,\n\tyshift=1.5cm] (Init) \n {\\begin{tabular}{l}\n $\\text{Gen}_0$\\\\\n\t\\quad $x_i'==0$, $y_i'==0$, $\\forall i=1,\\ldots, m$\\\\\n \\end{tabular}};\n \n \\node[normal, \n right of=S0, \n node distance=4cm, \n anchor=center,\n\ttext width=2cm] (S1) \n {%\n \\begin{tabular}{l} \n $\\text{Gen}_2$\\\\\n\t\\quad $y_1'==1$\\\\\n\t\\quad $*\/\\gamma=1$\\\\\n \\end{tabular}\n };\n \n \\node[normal,\n below of=S0,\n yshift=-0.5cm,\n anchor=center,\n text width=2cm] (S2) \n {%\n \\begin{tabular}{l}\n $\\text{Gen}_3$\\\\\n\t\\quad $x_2'==1$\\\\\n\t\\quad $*\/\\gamma=0$\\\\\n \\end{tabular}\n };\n\n \\node[normal,\n right of=S2,\n node distance=4cm,\n anchor=center] (S3) \n {%\n \\begin{tabular}{l}\n $\\text{Gen}_4$\\\\\n\t\\quad $y_2'==1$\\\\\n\t\\quad $*\/\\gamma=1$\\\\\n \\end{tabular}\n };\n\n\\node[state] (Sdots) [below of=S2,draw=none,node distance=1cm] {$\\ldots$};\n\\node[state] (Sdots2) [below of=S3,draw=none,node distance=1cm] {$\\ldots$};\n\n \\node[normal,\n below of=Sdots,\n yshift=0.1cm,\n anchor=center,\n text width=2cm,\n\tdraw=black, thin] (S4) \n {%\n \\begin{tabular}{l}\n $\\text{Gen}_{2m-1}$\\\\\n\t\\quad $x_m'==1$\\\\\n\t\\quad $*\/\\gamma=0$\\\\\n \\end{tabular}\n };\n\n\n \\node[normal,\n right of=S4,\n node distance=4cm,\n anchor=center] (S5) \n {%\n \\begin{tabular}{l}\n $\\text{Gen}_{2m}$\\\\\n\t\\quad $y_m'==1$\\\\\n\t\\quad $*\/\\gamma=1$\\\\\n \\end{tabular}\n };\n\n\n \\path \t(S0) \tedge node[above]{$*| y_1:=0$} (S1);\n\t\\path \t(Init.190) \tedge node[right]{$*| 
x_1:=0$} (S0);\n\t\\path \t(Init.350) \tedge node[right]{$*| y_1:=0$} (S1);\n\t\n \\path \t(S1)\tedge\tnode[right]{$* | x_2:=0$} (S2);\n \\path \t(S2)\tedge\tnode[above]{$* | y_2:=0$} (S3);\n \\path (S3)\tedge\tnode[right=0.2cm,align=left]{$* | x_3:=0$ \\\\ $\\ldots$} (S4);\n\n\t\\path (S4)\tedge\tnode[above]{$* | y_m:=0$} (S5);\n\t\\draw [->] (S5) -- ++(0,-0.7cm) -- node[below]{$* | x_1:=0$} ++(-5.5cm,0cm) |- (S0.west);\n\\end{tikzpicture}\n\n\\caption{The timed automaton for the generator part of $\\Diamond_{I} a$ for motion planning. $2m$ is the number of clocks required to store the states of the timed eventually ($\\Diamond_I$) operator. It is computed based on the interval $I$. Detailed computation and derivation can be found in \\cite{Maler2006a}. $x_i'$ represents the rate of the clock $x_i$. By setting the rate to be 0, we essentially deactivate the clock. The \\lq{$*$}\\rq ~ symbol means that there is no value for that particular input, output or guard for that state. There are no B\\\"{u}chi states since the time is bounded}%\n\\label{fig:EI_GEN}%\n\\end{figure}\n\n\\begin{figure}%\n\\centering\n\\begin{tikzpicture}[->,>=stealth']\n\n\n\n\n \\node[normal,\n\ttext width=1.5cm] (S0) \n {\\begin{tabular}{l}\n $\\text{Chk}_1$\\\\\n\t$y_1\\leq b$\\\\\n\t$\\bar{p}\/*$\\\\\n \\end{tabular}};\n\n \\node[normal,\n\ttext width=1.5cm,\n\tyshift=1.5cm] (Init0) \n {\\begin{tabular}{l}\n $\\text{Chk}_{00}$\\\\\n\t$x_1 \\leq a$\\\\\n \\end{tabular}};\n\n \\node[normal,\n\ttext width=1.5cm,\n\tright of=Init0,\n\tnode distance=4.5cm] (Init1) \n {\\begin{tabular}{l}\n $\\text{Chk}_{01}$\\\\\n\t$y_1 \\leq a$\\\\\n \\end{tabular}};\n \n \\node[normal, \n right of=S0, \n node distance=3cm, \n anchor=center,\n\ttext width=1.5cm] (S1) \n {%\n \\begin{tabular}{l} \n $\\text{Chk}_2$\\\\\n\t$x_2\\leq a$\\\\\n\t$p\/*$\\\\\n \\end{tabular}\n };\n\n \\node[normal, \n right of=S1, \n node distance=3cm, \n anchor=center,\n\ttext width=1.75cm] (S6) \n {%\n \\begin{tabular}{l} \n 
$\\text{Chk}_3$\\\\\n\t$z] \t(Init1.west) \t-| node[left,yshift=-0.5cm]{$ y_1\\geq a | *$} (S1);\n\t\\draw [->] \t(Init1.east) \t-| node[right,yshift=-0.5cm,align=center]{$y_1\\geq a$\\\\$z:=0$} (S6);\n\t\n \\path \t(S1)\tedge\tnode[right]{$x_2 \\geq a |*$}node[left]{ch!} (S2);\n \\path \t(S2)\tedge\tnode[above]{$ y_2\\geq b|*$} (S3);\n\t\\path \t(S3.5) \tedge node[above]{$ *|z:=0$} (S7.175);\n\t\\path \t(S7.184) \tedge node[below]{} (S3.355);\n\n\n \\path (S3)\tedge\tnode[right=0.2cm,align=left]{$x_3\\geq a | *$ \\\\ $\\ldots$} (S4);\n\n\t\\path (S4)\tedge\tnode[above]{$y_m\\geq b | *$} (S5);\n\t\n\t\\path \t(S5.5) \tedge node[above]{$ *|z:=0$} (S8.175);\n\t\\path \t(S8.184) \tedge node[below]{} (S5.355);\n\t\n\t\\draw [->] (S5) -- ++(0,-0.7cm) -- node[below]{$ x_1 \\geq a | *$} ++(-4.1cm,0cm) |- (S0.west);\n\\end{tikzpicture}\n\n\\caption{The timed automaton for the checker part of $\\Diamond_{I} a$ for motion planning. $2m$ is the number of clocks required for the timed eventually ($\\Diamond_I$) operator. There are no B\\\"{u}chi states since the time is bounded.}%\n\\label{fig:EI_CHK}%\n\\end{figure}\n\nThe IOTA for timed eventually ($\\Diamond_{I} a$) is decomposed into two automata: the generator generates predictions of the future outputs of the system, while the checker verifies that the generated outputs actually fit the inputs. Detailed derivations and verifications of the models can be found in \\cite{Maler2006a}. The composition between them is achieved through the shared clock variables. Additional synchronization (`ch!') is added in our case to determine the final satisfaction condition for the control synthesis. A finite time trajectory satisfies the MITL when the output signal of the generator automaton (Fig. \\ref{fig:EI_GEN}) includes a pair of rising and falling edges verified by the checker automaton. The transition from $\\text{Chk}_2$ to $\\text{Chk}_4$ (Fig. \\ref{fig:EI_CHK}) marks the exact time when such a falling edge is verified. 
This guarantees that the time trajectory before the synchronization is a finite time trajectory that satisfies the MITL. \n\nThe composition of IOTA based on logic trees such as that of Fig. \\ref{fig:tree} is defined similarly to \\cite{Maler2006a}, with some modifications to handle cases where logic nodes have two children, for example the until and conjunction operators.\n\n\\begin{df}[I\\/O Composition] \\hfill \\\\\n\\textit{\nLet $\\mathcal{A}^1_1 =(\\Sigma^1_1, Q^1_1, \\Gamma^1_1, \\mathcal{C}^1_1, \\lambda^1_1, \\gamma^1_1, I^1_1, \\Delta^1_1, {q^1_1}_0, F^1_1)$, $\\mathcal{A}^1_2 =(\\Sigma^1_2, Q^1_2, \\Gamma^1_2, \\mathcal{C}^1_2, \\lambda^1_2, \\gamma^1_2, I^1_2, \\Delta^1_2, {q^1_2}_0, F^1_2)$ be the input-side automata. If there is only one, then $\\mathcal{A}^1_1$ is used. Let $\\mathcal{A}^2 =(\\Sigma^2, Q^2, \\Gamma^2, \\mathcal{C}^2, \\lambda^2, \\gamma^2, I^2, \\Delta^2, q_0^2, F^2)$ be the output-side automaton. Because of the input output relationship between them, they must satisfy the condition that $[\\Gamma^1_1, \\Gamma^1_2] = \\Sigma^2$. The composition is a new IOTA such that\n\\[\n\\mathcal{A} =(\\mathcal{A}^1_1, \\mathcal{A}^1_2) \\otimes (\\mathcal{A}^2) = ([\\Sigma_1^1,\\Sigma_2^1], Q, \\Gamma^2, \\mathcal{C}, \\lambda, \\gamma, I, \\Delta, q_0, F)\n\\]\nwhere\n\\begin{align*}\nQ=&\\{(q^1_1,q^1_2,q^2) \\in Q^1_1 \\times Q^1_2 \\times Q^2, \\\\\n& s.t. (\\gamma_1^1(q_1^1),\\gamma_2^1(q_2^1)) = \\lambda^2(q^2)\\}\n\\end{align*}\n$\\mathcal{C} = (\\mathcal{C}^1_1 \\cup \\mathcal{C}^1_2 \\cup \\mathcal{C}^2)$, $\\lambda (q^1_1,q^1_2,q^2) = [\\lambda^1_1(q^1_1),\\lambda^1_2(q^1_2)]$, $I_{(q^1_1,q^1_2,q^2)} = I^1_{(q^1_1,q^1_2)} \\cap I^2_{q^2}$, $q_0 = ({q^1_1}_0, {q^1_2}_0, q_0^2)$ and $F = F^1_1 \\cap F^1_2 \\cup F^2$.\n}\n\\end{df}\n\n\\begin{figure*}%\n\\centering\n\\includegraphics[width=6.6in]{until}%\n\\caption{The resulting timed automaton in UPPAAL of $\\phi_1$. 
The purple text under the state names represents the invariant $I$. The green text along the edges represents guard conditions, while the blue text represents clock resets. The B\\\"{u}chi accepting states are indicated by a subscript b in the state names. }%\n\\label{fig:until}%\n\\end{figure*}\n\n\\subsection{Path Synthesis using UPPAAL}\nThe overall path synthesis framework is summarized as follows:\n\\begin{itemize}\n\t\\item First, the robot and the environment are abstracted to a timed automaton (TA) $\\mathcal{T}_\\text{map}$ using cell decomposition, and the time to navigate from one cell to another is estimated based on the robot's dynamics; see, for example, Fig. \\ref{fig:map}.\n\t\\item Second, the MITL formula is translated to an IOTA $\\mathcal{A}$ using the method described in the previous section.\n\t\\item The IOTA $\\mathcal{A}$ is then composed as a product with the TA $\\mathcal{T}_\\text{map}$ using the location labels. For instance, $pos1:B$ in Fig. \\ref{fig:map} will be composed with all IOTA states that do not satisfy the predicate $p(A)$ but satisfy $p(B)$.\n\t\\item The resulting timed automata are then automatically transformed to a UPPAAL \\cite{uppaal} model with an additional satisfaction condition verifier. An initial state is chosen so that the output at that state is 1. Any finite trajectory initiated from that state and satisfying the following conditions will satisfy the MITL specification: firstly, it has to visit at least one of the B\\\"{u}chi accepting states, and secondly, it has to meet the acceptance condition for the timed eventually operator. To perform such a search in UPPAAL, a final state is added, with transitions to it from every B\\\"{u}chi accepting state. 
A verification automaton is created to check the finite acceptance conditions for every timed eventually operator.\n\t\\item An optimal timed path is then synthesized using the UPPAAL verification tool.\n\\end{itemize}\nThe implementation of the first and second steps is based on the parsing and simplification functions of the ltl2ba tool \\cite{gastin_fast_2001}, with additional capabilities to generate IOTA. We then use the generated IOTA to autogenerate a Python script which constructs the UPPAAL model automatically through PyUPPAAL, a Python interface to create and lay out models for UPPAAL. The complete set of tools\\footnote{The tool is available on \\url{https:\/\/github.com\/yzh89\/MITL2Timed}} is implemented in C to optimize speed.\n\n\\section{Case Study and Discussion} \\label{sec:case}\nWe demonstrate our framework for a simple environment and for some typical temporal logic formulas. Although our tool is not limited by the complexity of the environment, we use a simple environment to make the resulting timed automaton easy to visualize. Let us consider the timed automaton from the abstraction in Fig. \\ref{fig:map} and the following LTL formula,\n\\[\\phi_1 = (\\neg A \\mathbf{U} B) \\wedge (\\Diamond A).\n\\]\nThis specification requires the robot to visit area $B$ first and also eventually visit $A$. The resulting automaton based on the methods in the previous section is shown in Fig. \\ref{fig:until}. Each state corresponds to a product state between a state in $\\mathcal{T}_\\text{map}$ and a state in the IOTA $\\mathcal{A}$. The B\\\"{u}chi accepting states are indicated by an additional \\textit{b} in their state names. We obtained the optimal path by first adding a final state and linking every accepting state to it, and then using UPPAAL to find one of the shortest paths that satisfies the condition ``$E<> final$''. UPPAAL then computes a fastest path in the timed automaton that reaches the final state, if one exists. 
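For this small example, the result returned by UPPAAL can be cross-checked by a brute-force search over the product of the map and the specification. The sketch below is our own illustration (UPPAAL itself handles the clocks and real-valued time); it treats every move as one time unit and tracks only whether $B$ has already been visited:

```python
# Brute-force product search for phi_1 = (not A U B) & (<> A) on the map
# of Fig. 1 (illustrative; each move costs one time unit).
from collections import deque

ADJ = {"pos0": ["pos1", "pos3"], "pos1": ["pos0", "pos2"],
       "pos2": ["pos1", "pos3"], "pos3": ["pos0", "pos2"]}
LABEL = {"pos1": "B", "pos3": "A"}

def fastest_path(start="pos0"):
    """BFS over (location, B seen yet?); BFS depth equals elapsed time units."""
    init = (start, LABEL.get(start) == "B")
    queue = deque([(init, [start])])
    seen = {init}
    while queue:
        (loc, got_b), path = queue.popleft()
        for nxt in ADJ[loc]:
            lab = LABEL.get(nxt)
            if lab == "A" and not got_b:
                continue                      # would violate (not A) U B
            state = (nxt, got_b or lab == "B")
            if lab == "A" and state[1]:
                return path + [nxt]           # eventually-A satisfied
            if state not in seen:
                seen.add(state)
                queue.append((state, path + [nxt]))
    return None

print(fastest_path())  # ['pos0', 'pos1', 'pos0', 'pos3']
```

The returned path takes three time units and matches the optimal trajectory reported below; note that $pos0 \\rightarrow pos1 \\rightarrow pos2 \\rightarrow pos3$ is equally fast, so the optimum is not unique.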
Such a feasible path is a finite trajectory that satisfies the specification. \nIn this paper, we are interested in planning a path that satisfies MITL, so a finite time trajectory is a valid solution.\nThe initial state of the automaton is loc0, which is the only state at pos0 that outputs 1. The optimal trajectory in the product automaton is $loc0 \\rightarrow loc2 \\rightarrow loc7 \\rightarrow loc6_b$. This trajectory means that the optimal way for a robot to satisfy the LTL is to traverse the map in the following order: $pos0 \\rightarrow pos1:B \\rightarrow pos0 \\rightarrow pos3:A$.\n\n\\begin{figure*}%\n\\includegraphics[width=7in]{always_eventually_I}%\n\\caption{One of the resulting timed automata in UPPAAL of $\\phi_2$, corresponding to the checker of the timed eventually operator and the untimed always. Some of the edges are further annotated by the synchronization signal (ch!).}%\n\\label{fig:always_event_I}%\n\\end{figure*}\n\n\\begin{figure*}[!tb]\n \\centering{\\subfloat[]{\\includegraphics[width=3.5in]{always_eventually_I_0}\n\t\t\\label{fig:always_event_I1}}\n\t\t\\hfil\n\t\t\\subfloat[]{\\includegraphics[width=1.5in]{always_eventually_I_1}\n\t\t\\label{fig:always_event_I2}}}\n\t\t\\caption{Fig. (a) shows the other timed automaton of $\\phi_2$, corresponding to the generator of the timed eventually operator. Fig. (b) shows the verification timed automaton, which checks whether the falling edge of the generator is ever detected, i.e., whether a synchronization signal (ch!) has happened. This signal marks the end of a full eventually cycle. Similar to the LTL case, we ask UPPAAL to check the following property: whether there is a trajectory that leads to the final states in (a) and (b). The optimal path in this case is $(loc19,loc28)\\rightarrow (loc3,loc28) \\rightarrow (loc3,loc22_b)$. The states are products of the states of Fig. \\ref{fig:always_event_I} and Fig. (a). 
This path corresponds to $(pos0,t\\in[0,1])\\rightarrow (pos3:A,t\\in[1,2]) \\rightarrow (pos0,t\\in[2,3])$ in physical space. Repeating this path will satisfy $\\phi_2$. }\n\\label{fig:always_event_I_2}\n\\end{figure*}\n\nIn the second test case, the environment stays the same and the requirement is captured in an MITL formula $\\phi_2$,\n\\[\n\\phi_2 = \\Box \\Diamond_{[0,2]} A\n\\]\nThis requires the robot to perform a periodic survey of area \\textit{A} every 2s. The resulting timed automata are shown in Fig. \\ref{fig:always_event_I} and Fig. \\ref{fig:always_event_I_2}. As discussed earlier, if a synchronization signal (ch!) is sent, the falling edge of the generator automaton's output has been detected and verified. This marks the end of a finite trajectory that satisfies the MITL constraints. We used the automaton in Fig. \\ref{fig:always_event_I_2}(b) to receive such a signal. Similar to the LTL case, we ask UPPAAL to find a fastest path that leads to the final states in Fig. \\ref{fig:always_event_I_2}(a) and \\ref{fig:always_event_I_2}(b), if one exists.\n\nThe optimal trajectory in this case is $(loc19,loc28)\\rightarrow (loc3,loc28) \\rightarrow (loc3,loc22_b)$, which corresponds to $(pos0,t\\in[0,1])\\rightarrow (pos3:A,t\\in[1,2]) \\rightarrow (pos0,t\\in[2,3])$. This trajectory then repeats itself.\n\nAll the computations are done on a computer with a 3.4GHz processor and 8GB memory. Both of the previous examples require a very small amount of time $(<0.03s)$. We also tested our implementation on various other complex environments and MITL formulas. Table \\ref{MITLTime} summarizes our results for complex systems and formulas. The map demonstrated earlier is a 2x2 map (Fig. \\ref{fig:map}); we also examine 4x4 and 8x8 grid maps. The temporal logic formulas used are listed below. 
The time intervals in the formulas are scaled according to the map size.\n\\[\n\\phi_3 = \\Diamond_{[0,4]} A \\wedge \\Diamond_{[0,4]} B\n\\]\n\\[\\phi_4 = \\Diamond_{[2,4]} A \\wedge \\Diamond_{[0,2]} B\\]\n\n\\begin{table}\n\\begin{center}\n\\caption{Computation time for typical MITL formulas}\n\\label{MITLTime}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nMITL& Map & Transformation & Num of Timed & Synthesis\\\\\nFormula& Grid &Time & Automata Transitions & Time \\\\\\hline\n$\\phi_1$ &2x2 & $<0.001s$ & 22 & 0.016s\\\\ \\hline\n$\\phi_2$ &2x2 & $0.004s$ & 69 & 0.018s\\\\ \\hline\n$\\phi_3$ &2x2 & $0.40s$ & 532 & 0.10s\\\\ \\hline\n$\\phi_4$ &2x2 & $0.46s$ & 681 & 0.12s\\\\ \\hline\n$\\phi_1$ &4x4 & $0.004s$ & 181 & 0.062s\\\\ \\hline\n$\\phi_1$ &8x8 & $0.015s$ & 886 & 0.21s\\\\ \\hline\n$\\phi_2$ &8x8 & $0.015s$ & 1795 & 0.32s\\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIt can be seen from Table \\ref{MITLTime} that our algorithm works very well with common MITL formulas and scales satisfactorily with the dimensions of the map.\n\n\\section{Continuous Trajectory Generation}\\label{sec:continuous} \n\nIn this section, we briefly discuss generating a continuous trajectory from the discrete motion plan obtained from the timed automaton. Let us consider the nonholonomic dynamics of a unicycle car as given in (\\ref{eq:dynamics}).\n\n\\begin{equation} \\label{eq:dynamics}\n \\dot{\\begin{bmatrix}\nx\\\\y\\\\ \\theta\n\\end{bmatrix}} =u\\begin{bmatrix}\n\\cos\\theta \\\\ \\sin\\theta \\\\0\n\\end{bmatrix}+ \\omega \\begin{bmatrix}\n0\\\\0\\\\1\n\\end{bmatrix}\n\\end{equation}\nwhere $\\omega$ and $u$ are the control inputs. It should be noted that the above nonholonomic dynamics is controllable, and we assume no constraints on the control inputs at this point. \nThe above sections provide the sequence of cells to be visited in the grid-like environment (Fig. \\ref{fig:traj}). 
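The elementary motions discussed in this section can be prototyped by numerically integrating (\\ref{eq:dynamics}) with constant inputs. The sketch below uses forward-Euler integration with illustrative constants of our own choosing (unit cells, one second per segment); it also checks numerically that time-scaled inputs reach the same endpoint, the property formalized in Lemma \\ref{lem:opt}:

```python
# Forward-Euler integration of the unicycle dynamics (Eq. 1) with
# constant inputs; constants are illustrative, not from our setup.
import math

def integrate(u, omega, T, q0=(0.0, 0.0, 0.0), steps=20000):
    """Integrate (x, y, theta) under constant (u, omega) for T seconds."""
    x, y, th = q0
    dt = T / steps
    for _ in range(steps):
        x += u * math.cos(th) * dt
        y += u * math.sin(th) * dt
        th += omega * dt
    return x, y, th

# Forward segment: cross one unit cell in one second.
fwd = integrate(u=1.0, omega=0.0, T=1.0)
# Left turn: quarter circle of radius 0.5, traversed in one second.
turn = integrate(u=math.pi / 4, omega=math.pi / 2, T=1.0)
# Time scaling: inputs scaled by 1/lam and applied for lam seconds
# reach (numerically) the same final state.
lam = 2.0
slow = integrate(u=(math.pi / 4) / lam, omega=(math.pi / 2) / lam, T=lam)
```

Here `turn` ends near $(0.5, 0.5, \\pi/2)$, i.e., half a cell forward and half a cell sideways, and `slow` reproduces the same endpoint in twice the time, as the lemma predicts.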
\n\\begin{figure}\n\\centering\n\\includegraphics[width=3in]{Continuous_trajectory.png} \n\\caption{Workspace and the continuous trajectory for the specification $\\phi_1$. The initial location is the top-left corner cell (I).}\n\\label{fig:traj}\n\\end{figure}\n\nThe outputs of the timed automaton are treated as time-stamped waypoints for the robot. We have to ensure that the robot moves from one waypoint to the next within the given initial and final times and, at the same time, that the trajectory remains within the associated cells. \n\nSince our environment is decomposed into rectangular cells, the robot only needs to move forward, turn right, turn left and make a U-turn. We synthesize a controller that makes the robot perform these elementary motion segments within the given time.\n\n For moving forward, the input $\\omega$ is chosen to be $0$ and the velocity $u$ is tuned so that the robot reaches the final position in time. For turning left and turning right, $\\omega$ is chosen to take positive and negative values, respectively, so that a circular arc is traversed. Similarly, the U-turn is implemented so that the robot completes it within a single cell. \n \n Let us denote the state of the system at time $t$ by the pair $(q,t)$, i.e., $x(t)=q_1,~y(t)=q_2$ and $\\theta(t)=q_3$, where $q=[q_1,~q_2,~q_3]$. Then we have the following lemma on the optimality of the \n control inputs.\n \n \\begin{lm} \\label{lem:opt}\n \\textit{ If $\\bar u(t)$ and $\\bar \\omega (t)$, $t \\in [0,1]$, are a pair of control inputs s.t. the dynamics moves from the state $(q_0,0)$ to $(q_1,1)$, then $u(t_0+t)=\\frac1\\lambda \\bar u(\n \\frac t\\lambda)$ and $\\omega(t_0+t)=\\frac1\\lambda \\bar \\omega( \\frac t\\lambda)$ move the system from $(q_0,t_0)$ to $(q_1,t_0+\\lambda)$ for any $\\lambda >0$. 
\\\\\n Moreover, if $\\bar u$ and $\\bar \\omega$ move the system optimally, i.e.\n \\begin{equation}\n J(\\bar u,\\bar \\omega)=\\min_{u(\\cdot),\\omega(\\cdot)} \\int_{0}^{1} [r_1 u^2(t)+r_2 \\omega^2(t)]dt\n \\end{equation}\n then $u$ and $\\omega$ given above are also optimal for moving the system from $(q_0,t_0)$ to $(q_1, t_0+\\lambda)$, i.e.\n \\begin{equation}\n J_1(u,\\omega)=\\min_{u_1(\\cdot),\\omega_1(\\cdot)} \\int_{t_0}^{t_0+\\lambda}[r_1 u^2_1(t)+r_2 \\omega^2_1(t)]dt. \n \\end{equation}\n}\n \\end{lm}\n\n\\begin{proof}\nLet us first denote \n\\[\nG(q)=\\begin{bmatrix}\n \\cos(\\theta(t)) & 0\\\\ \\sin(\\theta(t)) & 0\\\\0 & 1\n \\end{bmatrix}\n\\]\n where $q=[x(t), y(t),\\theta(t)]$. Therefore, dynamics (\\ref{eq:dynamics}) can be written as $\\dot q=G(q) \\begin{bmatrix}\n u \\\\\\omega\n \\end{bmatrix}.\n$\nLet us now consider $\\bar q(t)=[x(t_0+\\lambda t),~y(t_0+\\lambda t),~\\theta(t_0+\\lambda t)]$. \nTherefore, $\\dot{\\bar q} =\\lambda G(\\bar q)\\begin{bmatrix} u(t_0+\\lambda t) \\\\ \\omega (t_0+\\lambda t) \\end{bmatrix}$. \nUsing the definition of $u$ and $\\omega$ in the lemma, we get $\\dot{\\bar q}=G(\\bar q)\n\\begin{bmatrix}\n\\bar u \\\\\\bar \\omega\n\\end{bmatrix}$.\nBy the hypothesis of the lemma, $\\bar{u}$ and $\\bar \\omega$ move the system from $(q_0,0)$ to $(q_1,1)$, i.e., from $[x(t_0),y(t_0),\\theta(t_0)]=q_0$ to $q_1=\\bar q(1)=[x(t_0+\\lambda),y(t_0+\\lambda),\\theta(t_0+\\lambda)]$.\\\\\n\nFor optimality, suppose the proposed $u,~\\omega$ are not optimal, and let $u^*$ and $\\omega^*$ be optimal, i.e. 
\n\\begin{equation}\n\\int_{t_0}^{t_0+\\lambda}[r_1{u^*}^2(t)+r_2{\\omega^*}^2(t)]dt \\le \\int_{t_0}^{t_0+\\lambda}[r_1u^2(t)+r_2\\omega^2(t)]dt\n\\label{eq:opt1}\n\\end{equation}\nNow let us construct $\\bar u^*(t)=\\lambda u^*(t_0+\\lambda t)$ and $\\bar \\omega^*(t)=\\lambda \\omega^*(t_0+\\lambda t)$.\n\nTherefore, substituting $t=t_0+\\lambda s$ in (\\ref{eq:opt1}) and cancelling the common factor $\\lambda$,\n\\begin{multline*}\n\\int_{0}^{1}[r_1{u^*}^2(t_0+\\lambda s)+r_2{\\omega^*}^2(t_0 +\\lambda s)]ds \\\\\n\\leq \\int_{0}^{1}[r_1u^2(t_0+\\lambda s)+r_2\\omega^2(t_0+\\lambda s)]ds\n\\end{multline*}\nMultiplying both sides by $\\lambda^2$ and using the definitions of $\\bar u^*$, $\\bar \\omega^*$, $\\bar u$ and $\\bar \\omega$,\n\\begin{equation} \\int_{0}^{1}[r_1 {\\bar {u^*}}^2(s)+r_2{\\bar {\\omega^*}}^2(s)]ds \\le \\int_{0}^{1}[r_1{\\bar u}^2(s)+r_2\\bar \\omega^2(s)]ds \\label{Eqn:ineq1} \\end{equation}\n\nBut, by the hypothesis, $\\bar u$ and $\\bar \\omega$ are optimal and hence \n\\begin{equation} \\label{Eqn:ineq2}\n \\int_{0}^{1}[r_1 {\\bar {u^*}}^2(s)+r_2{\\bar {\\omega^*}}^2(s)]ds \\ge \\int_{0}^{1}[r_1{\\bar u}^2(s)+r_2\\bar \\omega^2(s)]ds\n\\end{equation}\nCombining (\\ref{Eqn:ineq1}) and (\\ref{Eqn:ineq2}), we get\n\\begin{equation}\n \\int_{0}^{1}[r_1 {\\bar {u^*}}^2(s)+r_2{\\bar {\\omega^*}}^2(s)]ds = \\int_{0}^{1}[r_1{\\bar u}^2(s)+r_2\\bar \\omega^2(s)]ds\n\\end{equation}\n\nDividing by $\\lambda^2$ and changing the integration variable back, one obtains \n\n\\begin{equation}\n \\int_{t_0}^{t_0+\\lambda}[r_1 { {u^*}}^2(s)+r_2{{\\omega^*}}^2(s)]ds = \\int_{t_0}^{t_0+\\lambda}[r_1{u}^2(s)+r_2\\omega^2(s)]ds\n\\end{equation}\n\nHence the proposed $u$ and $\\omega$ are optimal whenever $\\bar u$ and $\\bar \\omega$ are optimal.\n \\end{proof}\n \\begin{rem}\n Lemma \\ref{lem:opt} states that if the controls for the elementary motions from initial time $0$ to final time $1$ are synthesized, then, by properly shifting and scaling in time and scaling the magnitudes, controls for any movement from any initial time to any final time can be synthesized without solving any further optimization problem.\n \\end{rem}\n\n\n\\section{Conclusion} 
\\label{sec:conclusion}\nIn this paper, we have presented a timed automaton based approach to generate a discrete plan for the robot to perform temporal tasks with finite time constraints. We implemented the algorithm in an efficient and generic way so that it can translate the time constraints and temporal specifications to timed automaton models in UPPAAL and synthesize the path accordingly. We then demonstrated our algorithm in grid-type environments with different MITL formulas. Although our case studies use grid-type environments, the approach generalizes to most motion planning problems in which the environment can be decomposed into cells. We also provide a brief overview of how an optimal continuous trajectory can be generated from the discrete plan. As future work, we plan to extend the approach to dynamic obstacles as well as to multi-agent systems. \n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\\section{Swift detection}\n\nA bright, long-soft burst, GRB\\,060117, was detected by the \\swift\\\/ satellite on\nJanuary 17, 2006, at 6:50:01.6\\,UT.\nIt showed a multi-peak structure with T$_\\mathrm{90}$=$16\\pm1$\\,s and a\nmaximum peak flux of $48.9\\pm1.6\\,{\\rm ph\\,cm}^{\\rm -2} {\\rm s}^{\\rm -1}$. Thus, \nin terms of peak flux, GRB\\,060117 was the most intense GRB detected by \\swift\\\/ so far.\nCoordinates computed by \\swift\\\/ were available within 19\\,s and\nimmediately distributed by GCN \\cite{gcn4538}.\n\n\\section{FRAM and optical transient observation}\n\nFRAM is part of the Pierre Auger cosmic-ray observatory \\cite{auger},\nand its main purpose is to continuously monitor the atmospheric\ntransmission. \nFRAM works as an independent, RTS2-driven, fully robotic\nsystem, and it performs a photometric calibration of the sky at various\nUV-to-optical wavelengths using a 0.2\\,m telescope and a photoelectric\nphotomultiplier.
\n\nFRAM received the notice at 06:50:20.8\\,UT, 19.2\\,s after the trigger,\nand immediately started the slew. \nThe first exposure started at 06:52:05.4, 123.8\\,s after the GRB. \nEight images with different exposures were taken before the observation\nwas terminated. \nA bright, rapidly decaying object was found, and its presence was\nreported by \\cite{gcn4535} soon after the discovery.\nThe FRAM lightcurve of this optical transient is shown in Figure~1.\n\n\\begin{figure}[t!] \n \\begin{center} \\label{fig4}\n \\resizebox{0.4\\hsize}{!}{\n\t\\includegraphics{jelinekm_fig1.eps}\n\t} \n \\caption{\\small The R-band afterglow lightcurve of GRB\\,060117. \n The lightcurve is fitted as a superposition of the reverse shock\n (dotted line) and the forward shock (dashed line).\n\t}\n \\end{center} \n\\end{figure}\n\n\\section{Interpretation}\n\nOur preferred interpretation (based on the work of \\cite{shao05}) is to fit the \ndata as a \ntransition between the reverse and the forward shock, with the passage of\nthe typical frequency break $\\nu_m$ through the observed passband at\ntime $t_{m,f}$.
\nCorresponding decay indices are $\\alpha_{\\mathrm Reverse}$=2.49$\\pm$0.05 and\n$\\alpha_{\\mathrm Forward}$=1.47$\\pm$0.03 (see Fig\\,3).\n\nOther possible interpretations, together with more details about the FRAM telescope, the data processing, and other\nfollow-up attempts, can be found in \\cite{aal}.\n\n\\acknowledgments\n{\\small The telescope FRAM was built and is operated under the support of the\nCzech Ministry of Education, Youth, and Sports through its grant\nprograms LA134 and LC527.\nMJ would like to thank the Spanish Ministry of Education and Science\nfor the support via grants AP2003-1407, ESP2002-04124-C03-01, and\nAYA2004-01515 (+ FEDER funds); MP was supported by the Grant Agency of\nthe Academy of Sciences of the Czech Republic grant B300100502.}\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\\section{Introduction}\n\nA wide range of phenomena in biology and social sciences can be described by the combination of classical (local), linear or nonlinear, \\emph{diffusion} with some \\emph{nonlocal transport} effects. Examples can be found in bacterial chemotaxis \\cite{keller_segel,painter_hillen}, animal swarming phenomena \\cite{okubo,capasso}, pedestrian movements in a dense crowd \\cite{hughes}, and more in general in socio-economical sciences \\cite{sznajd,aletti}. In a fairly general setting, a set of $N$ individuals $x_1,\\ldots,x_N$ located in a sub-region of the Euclidean space $\\mathbb R^d$ are subject to a drift which is affected by the status of every other individual. In most of the above-mentioned applications, such a ``biased drift'' can be expressed through a set of first order ordinary differential equations\n\\begin{equation}\\label{eq:intro_discrete}\n\\dot{x}_i(t)=v[x_1(t),\\ldots,x_N(t)],\\qquad i=1,\\ldots,N,\n\\end{equation}\nin which the velocity law $v$ is known.
Having in mind a particle system obeying the laws of classical mechanics or electromagnetism, the set of equations \\eqref{eq:intro_discrete} is quite unconventional due to the absence of inertia. On the other hand, this choice is very common in the modelling of socio-biological systems, mainly due to the following three reasons. \n\\begin{itemize}\n\\item Inertial effects are negligible in many socio-biological aggregation phenomena. Even in cases in which the system is appropriate for a fluid-dynamical description, a `thinking fluid' model, with a velocity field already adjusted to equilibrium conditions, is often preferable to a second-order approach. The typical examples are in traffic flow and pedestrian flow modelling. Moreover, it is well known in the context of cell-aggregation modelling that the time of response to the chemoattractant signal is, most of the time, negligible. Finally, inertia is almost irrelevant in many contexts of socio-economical sciences, such as opinion formation dynamics.\n\\item First-order models turn out to reproduce realistic patterns in concrete situations arising in traffic flow, pedestrian motion, and cell aggregation; such an achievement is satisfactory in applied fields that often lack a unified rigorous modelling approach.\n\\item In several practical problems, such as the behaviour of a crowd in a panic situation, the model can be seen as the outcome of an optimization process performed externally, in which the ``best strategy'' needed to solve the problem under study (reaching the exit in the shortest possible time, in the crowd example) is transmitted to the individuals in real time (e.g.
a set of ``dynamic'' evacuation signals in a smart building).\n\\end{itemize}\n\nFurther to the `discrete' approach \\eqref{eq:intro_discrete}, these models are often posed in terms of a ``continuum'' PDE approach via a continuity equation\n\\begin{equation}\\label{eq:intro_continuum}\n\\partial_t \\rho + \\mathrm{div}(\\rho v[\\rho]) = 0,\n\\end{equation}\nin which $\\rho(\\cdot,t)$ is a time-dependent probability measure expressing the distribution of individuals on a given region, and in which the continuum velocity map $v=v[\\rho]$ is derived as a reasonable ``coarse-grained'' version of its discrete counterpart in \\eqref{eq:intro_discrete}. Biological movements and socio-economical dynamics are often simulated at the continuum level, as the PDE approach is easier to handle when analysing the qualitative behaviour of the whole system, e.g. the emergence of a specific pattern, the occurrence of concentration phenomena, or the formation of shock waves or travelling waves. In this regard, the descriptive power of the qualitative properties of the solutions in the continuum setting is an argument in favour of the PDE approach \\eqref{eq:intro_continuum}. On the other hand, the intrinsic discrete nature of the applied target situations under study would rather suggest an `individual based' description as the most natural one. For this reason, the justification of continuum models \\eqref{eq:intro_continuum} as \\emph{many-particle limits} of \\eqref{eq:intro_discrete} in this context is an essential requirement to validate the use of PDE models.\n\nAs briefly mentioned above, the velocity law $v=v[\\rho]$ in the PDE approach \\eqref{eq:intro_continuum} may include several effects, ranging from diffusion to external force fields, and from nonlinear convection to nonlocal interaction terms.
We produce here a non-exhaustive list of results available in the literature in which the continuum PDE \\eqref{eq:intro_continuum} is obtained as a limit of a system of interacting particles, with a special focus on \\emph{deterministic} particle limits, i.e. in which particles move according to a system of ordinary differential equations (i.e. without any stochastic term). The presence of a diffusion operator has several possible counterparts at the discrete level. The literature on this subject involving probabilistic methods is extremely rich and, by now, well established, see e.g. \\cite{varadhan1,varadhan2,presutti} only to mention a few. A first attempt (mainly numerical) to a fully deterministic approach to diffusion equations is due to \\cite{russo}, see \\cite{gosse} for the case of nonlinear diffusion. \n\nWithout diffusion and with only a local dependency $v=v(\\rho)$, an extensive literature has been produced based on probabilistic methods (exclusion processes), see e.g. \\cite{Ferrari,Ferrari-Nejjar}. A first rigorous result based on fully deterministic ODEs at the microscopic level for a nonlinear conservation law was recently obtained in \\cite{DiFra-Rosini}. Nonlocal velocities $v=W\\ast \\rho$ have been considered as a special case of the theory developed in \\cite{carrillo_choi_hauray}, with $W$ a given kernel (possibly singular) using techniques coming from kinetic equations, see \\cite{hauray_jabin}. In all the above mentioned results, the particle system is obtained as a discretised version of the Lagrangian formulation of the system.\n\nA slightly more difficult class of problems is the one in which the velocity $v=v[\\rho]$ depends \\emph{both locally and non-locally} from $\\rho$.\nSeveral results about the mathematical well-posedness of such models are available in the literature, which use either classical nonlinear analysis techniques or numerical schemes. 
In the paper \\cite{colombo} a similar model is studied in the context of pedestrian movements, and the existence and uniqueness of entropy solutions is proven. We also mention \\cite{defilippis_goatin}, which covers a more general class of problems, and \\cite{amadori_shen} covering a similar model in the context of granular media. A quite general result was obtained in \\cite{piccoli_rossi} in which the velocity map $\\rho\\mapsto v[\\rho]$ is required to be Lipschitz continuous as a map from the space of probability measures (equipped with some $p$-Wasserstein distance) with values in $C(\\mathbb R^d)$, and the authors prove convergence of a time-discretised Lagrangian scheme. We also mention \\cite{betancourt}, in which a special class of local-nonlocal dependencies has been considered, however in a different numerical framework. We also recall at this stage the related results in \\cite{Bu_DiF_Dol,Bu_Dol_Sch} on the overcrowding preventing version of the Keller-Segel system for chemotaxis, in which the existence and uniqueness of entropy solutions is proven. To our knowledge, no papers in the literature provide (so far) a rigorous result of convergence of a deterministic particle system of the form \\eqref{eq:intro_discrete} towards a PDE of the form \\eqref{eq:intro_continuum} in the case of local-nonlocal dependence $v=v[\\rho]$. 
Indeed, the result in \\cite{piccoli_rossi} does not apply to this case in view of the Lipschitz continuity assumption on the velocity field, see also a similar result in \\cite{goatin_rossi}.\n\nIn this paper we aim at providing, for the first time, a rigorous deterministic many-particle limit for the one-dimensional \\emph{nonlocal interaction equation} with \\emph{nonlinear mobility}\n\\begin{equation}\\label{eq:intro_PDE}\n \\partial_t \\rho -\\partial_x (\\rho v(\\rho) K'\\ast \\rho) = 0,\n\\end{equation}\nin which $v$ and $K$ satisfy the following set of assumptions:\n\\begin{itemize}\n\\item[(Av)] $v \\in C^1([0,+\\infty))$ is a decreasing function such that $v(0)=v_{max}>0$, $v(M)=0$ for some $M>0$, $v'<0$ on the interval $(0,M]$, $v\\equiv 0$ on $[M,+\\infty)$.\n\\item[(AK)] $K \\in C^2(\\mathbb R)$, $K(0)=0$ (without loss of generality), $K(x)=K(-x)$ for all $x\\in \\mathbb R$, $K'(x)>0$ for $x>0$, $K'' \\in \\mathrm{Lip}_{loc}(\\mathbb R)$.\n\\end{itemize}\nIn view of the applications we have in mind, the unknown $\\rho=\\rho(x,t)$ in \\eqref{eq:intro_PDE} will be assumed to be non-negative throughout the whole paper. The PDE \\eqref{eq:intro_PDE} is coupled with an initial condition\n\\begin{equation}\\label{eq:intro_initial}\n \\rho(x,0)=\\bar{\\rho}(x),\\qquad \\bar{\\rho}\\in L^\\infty(\\mathbb R)\\cap BV(\\mathbb R),\\,\\,\\,0\\leq \\bar{\\rho}(x)\\leq M,\\,\\,\\,\\hbox{$\\mathrm{supp}(\\bar{\\rho})$ compact}.\n\\end{equation}\nThe constant $M$ here plays the role of a \\emph{maximal density}, which the density is supposed never to exceed. Clearly, the property $\\rho\\in [0,M]$ has to be proven to be invariant in time. We notice that the total mass of $\\rho$ in \\eqref{eq:intro_PDE} is formally conserved.
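Indeed, this follows by formally integrating the equation in space: since $\\rho(\\cdot,t)$ is compactly supported, the flux vanishes at infinity, so that\n\\begin{equation*}\n\\frac{d}{dt}\\int_{\\mathbb R} \\rho(x,t)\\,dx = \\int_{\\mathbb R} \\partial_x\\big(\\rho\\, v(\\rho)\\, (K'\\ast\\rho)\\big)\\,dx = 0\\,.\n\\end{equation*}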
For simplicity, throughout the paper we shall set\n\\begin{equation*}\n \\| \\bar{\\rho} \\|_{L^1(\\mathbb R)}=1\\,.\n\\end{equation*}\nWe denote by $[\\bar{x}_{min}, \\bar{x}_{max}]$ the closed convex hull of $\\mathrm{supp}(\\bar{\\rho})$.\n\nOur goal is to rigorously approximate the solution $\\rho$ to \\eqref{eq:intro_PDE} with initial datum $\\bar{\\rho}$ via a set of moving particles. More precisely, we aim at proving that the \\emph{entropy solution} of the Cauchy problem for \\eqref{eq:intro_PDE} can be obtained as the large particle limit of a discrete Lagrangian approximation of the form \\eqref{eq:intro_discrete}. Such a Lagrangian approximation can be introduced as a natural generalization of the particle approximations previously considered in the literature in \\cite{DiFra-Rosini,DiFra-Fagioli-Rosini,DiFra-Fagioli-Rosini-Russo1,DiFra-Fagioli-Rosini-Russo2}. For a fixed integer $N$ sufficiently large, we split $[\\bar{x}_{min}, \\bar{x}_{max}]$ into $N$ intervals such that\nthe integral of the restriction of $\\bar{\\rho}$ over each interval equals $1\/N$.
More precisely, we let $\\bar{x}_0= \\bar{x}_{min}$ and $\\bar{x}_N = \\bar{x}_{max}$, and define recursively the points $\\bar{x}_i$ for $i \\in \\{ 1,\\, \\ldots,\\, N-1\\}$ as\n\\begin{equation}\\label{eq:dscr_IC}\n\\bar{x}_i = \\sup \\left\\lbrace x \\in \\mathbb R : \\int_{\\bar{x}_{i-1}}^x \\bar{\\rho}(y)\\, dy < \\frac{1}{N} \\right\\rbrace\\,.\n\\end{equation}\nIt is clear from the construction that $\\int_{\\bar{x}_{N-1}}^{\\bar{x}_N} \\bar{\\rho}(y)\\, dy = 1\/N$ and $\\bar{x}_0 < \\bar{x}_1 < \\ldots\\, < \\bar{x}_{N-1} < \\bar{x}_N$.\nConsider then $N+1$ particles located at time $t=0$ at the positions $\\bar{x}_i$, and let them evolve according to the following system of ODEs\n\\begin{equation}\\label{Odes}\n\\dot{x}_i(t) = - \\frac{v(R_i(t))}{N} \\sum_{j > i} K'(x_i(t) - x_j(t)) - \\frac{v(R_{i-1}(t))}{N} \\sum_{j < i} K'(x_i(t) - x_j(t))\\,,\n\\end{equation}\nwith $i\\in\\{0,\\ldots,N\\}$, where the discrete density $R_i(t)$ is defined as follows\n\\[ R_i (t):= \\frac{1}{N(x_{i+1}(t) - x_i(t))},\\qquad i=0,\\ldots, N-1. \\]\nIn \\eqref{Odes}, each particle $x_i$ has mass $1\/N$. We are then in a position to define the $N$-discrete density\n\\begin{equation}\\label{eq:discrete_density}\n\\rho^N(t,\\,x):= \\sum_{i=0}^{N-1} R^N_i (t) \\chi_{[x_i(t),\\,x_{i+1}(t))}(x).\n\\end{equation}\nWe observe that $\\rho^N(t,\\cdot)$ has total mass equal to $1$ for all times. We refer to system \\eqref{Odes} as the \\emph{non-local Follow-the-leader} scheme, since it is a non-local extension of the classical Follow-the-leader scheme previously considered in the literature. More in detail, system \\eqref{Odes} is motivated as follows. The right-hand side of \\eqref{Odes} represents the velocity of each particle. Therefore, it has to be reminiscent of a discrete Lagrangian formulation of the Eulerian velocity $-v(\\rho)K'\\ast \\rho$ in the continuity equation \\eqref{eq:intro_PDE}.
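The scheme just introduced is straightforward to implement. As a purely illustrative sketch (not part of the paper), assuming the prototypical choices $v(\rho)=\max(0,1-\rho)$ (so that $v_{max}=M=1$) and $K'(x)=x$ as admissible instances of (Av) and (AK), the quantile construction \eqref{eq:dscr_IC} and the right-hand side of \eqref{Odes} read:

```python
import numpy as np

def initial_particles(rho_bar, x_min, x_max, N, grid=4001):
    """Quantile construction (eq:dscr_IC): place N+1 particles so that
    each of the N gaps carries mass 1/N of the (normalized) density."""
    xs = np.linspace(x_min, x_max, grid)
    cdf = np.cumsum(rho_bar(xs))
    cdf /= cdf[-1]                             # normalized cumulative mass
    return np.interp(np.linspace(0.0, 1.0, N + 1), cdf, xs)

def ftl_rhs(x, v, Kp):
    """Right-hand side of the non-local Follow-the-leader system (Odes):
    forward sums weighted by v(R_i), backward sums by v(R_{i-1})."""
    N = len(x) - 1
    R = 1.0 / (N * np.diff(x))                 # discrete densities R_0,...,R_{N-1}
    xdot = np.zeros(N + 1)
    for i in range(N + 1):
        v_fwd = v(R[i]) if i < N else 0.0      # x_N has no particle to its right
        v_bwd = v(R[i - 1]) if i > 0 else 0.0  # x_0 has no particle to its left
        xdot[i] = -(v_fwd * Kp(x[i] - x[i + 1:]).sum()
                    + v_bwd * Kp(x[i] - x[:i]).sum()) / N
    return xdot
```

With an attractive kernel the two extremal particles drift inwards ($\dot{x}_0\geq 0$, $\dot{x}_N\leq 0$), in agreement with the uniform support bound established below.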
Now, since we are in one space dimension, the discrete density $R_i$ is a natural replacement for the continuum density $\\rho$, except that one has to decide whether the discrete density should be constructed in a forward, backward, or centred fashion. Our choice of splitting the velocity $\\dot{x}_i$ into a backward and a forward term is motivated by the sign of the nonlocal interaction $K'(x_i-x_j)$, which is concordant with the sign of $x_i - x_j$. Hence, since $K'(x)$ is negative on $x<0$, particles labelled by $x_j$ with $x_j>x_i$ yield a drift on $x_i$ oriented towards the \\emph{positive} direction. Since the role of the nonlinear mobility term $\\rho v(\\rho)$ is that of preventing overcrowding at high densities (consistently with the assumption of $v$ being monotone decreasing), such a drift term should be ``tempered'' by the position of the $(i+1)$-th particle. This motivates the use of $v(R_i)$ in the sum over $x_j>x_i$. A symmetric argument justifies the use of $v(R_{i-1})$ in the remaining part of the sum, over $x_j<x_i$.\n\nWe are now in a position to state the main result of this paper.\n\n\\begin{theorem}\\label{main}\nAssume that $v$ and $K$ satisfy (Av) and (AK), and let $\\bar{\\rho}$ be as in \\eqref{eq:intro_initial}. Then, for every $T>0$, the discrete density $\\rho^N$ constructed in \\eqref{eq:discrete_density} converges almost everywhere and in $L^1([0,\\,T] \\times \\mathbb R)$ to the unique entropy solution $\\rho$ of the Cauchy problem\n\\begin{equation}\\label{CauchyProblem}\n\\left\\lbrace \\begin{array}{ll}\n\\partial_t \\rho = \\partial_x(\\rho v(\\rho) K' \\ast \\rho) &(t,\\,x) \\in (0,\\,T] \\times \\mathbb R\\,,\\\\\n\\rho(0,\\,x) = \\bar{\\rho}(x) &x \\in \\mathbb R\\,.\n\\end{array}\\right.\n\\end{equation}\n\\end{theorem}\n\nAs a by-product, the above result also implies the existence of entropy solutions for \\eqref{CauchyProblem}, a task which has been addressed previously in \\cite{colombo,defilippis_goatin,Bu_DiF_Dol,betancourt}.\nImplicitly, our result also asserts the uniqueness of entropy solutions for \\eqref{eq:intro_PDE}, a side result that we shall prove as well in the paper, similarly to what was done in
\\cite{Bu_DiF_Dol,Bu_Dol_Sch}.\n\nThe need for the entropy condition to define a suitable notion of solution semigroup for \\eqref{eq:intro_PDE} is not only motivated by the possibility of proving its uniqueness. We actually prove in the paper that a mere notion of weak solution does not ensure the well-posedness of the semigroup, as multiple weak solutions can be produced with the same initial condition. \n\nOur paper is structured as follows. In Section \\ref{sec:2} we introduce the nonlocal follow-the-leader particle scheme and prove that it satisfies a discrete maximum principle, a crucial ingredient in order to deal with the particle approximation in the sequel of the paper. In Section \\ref{sec:convergence} we prove all the estimates needed in order to detect strong $L^1$ compactness for the approximating sequence $\\rho^N$. The main ingredient of this section is the $BV$ estimate proven in Proposition \\ref{totalvariation}. We emphasize that the presence of an \\emph{attractive} interaction potential in the particle system most likely implies a \\emph{growth} in time of the total variation. Therefore, one has to check that a blow-up of the total variation in finite time is avoided. In Section \\ref{sec:consi}, we prove that the limit of the approximating sequence is an entropy solution in the sense of Definition \\ref{solentropicadef}. This task is quite technical, as it requires checking a discrete version of Kruzkov's entropy condition. In Section \\ref{sec:discussion} we provide an explicit example of non-uniqueness of weak solutions, which has links with the admissibility of steady states. Finally, in Section \\ref{sec:numerics} we complement our results with numerical simulations.\n\n\\section{The non-local Follow-the-leader scheme}\\label{sec:2}\n\nIn this section we introduce and analyse in detail our approximating particle scheme \\eqref{Odes}.
Here the macroscopic variable $\\rho$ does not need to be labelled by $N$, as $N$ is supposed to be fixed throughout the whole section. The regularity assumptions on $v$ and $K$ in (Av) and (AK) imply that the right-hand side of \\eqref{Odes} is locally Lipschitz with respect to the $(N+1)$-tuple $(x_0,x_1,\\ldots,x_N)$, as long as we can guarantee that the denominator in $R_i$ does not vanish. Such a property is a consequence of the following \\emph{Discrete Maximum Principle}, ensuring that the particles can never touch each other. This implies both the global-in-time existence of solutions of the system~\\eqref{Odes}, and the conservation of the initial particle ordering during the evolution.\n\n\\begin{lemma}[Discrete Maximum Principle]\\label{lem:maximum}\nLet $N\\in\\mathbb N$ be fixed and assume that (Av) and (AK) hold. In particular, let $M>0$ be as in assumption (Av). Let $\\bar{x}_0<\\bar{x}_1<\\ldots<\\bar{x}_N$ be the initial positions for \\eqref{Odes}, and assume that\n\\begin{equation}\\label{eq:MPcondition}\n\\bar{x}_{i+1}-\\bar{x}_i \\geq \\frac{1}{MN}\\,,\\qquad i \\in \\{0,\\,\\ldots,\\,N-1\\}\\,.\n\\end{equation}\nThen every solution $x_i(t)$ to the system~\\eqref{Odes} satisfies\n\\begin{equation}\\label{MaxPrinc}\n\\frac{1}{MN} \\leq x_{i+1}(t) - x_i(t) \\qquad \\hbox{for all $i \\in \\{0,\\,\\ldots,\\,N-1\\}$ and for all $t \\in [0,\\,+\\infty)$}.\n\\end{equation}\nConsequently, the unique solution $(x_0(t),\\ldots,x_N(t))$ to \\eqref{Odes} with initial condition $(\\bar{x}_0,\\ldots,\\bar{x}_N)$ exists globally in time.\n\\end{lemma}\n\n\\begin{proof}\nLet $T_{max}>0$ be the maximal existence time for \\eqref{Odes}. Due to the assumptions (Av) and (AK), the local-in-time solution $(x_0(t),\\ldots,x_N(t))$ is $C^1$ on $[0,T_{max})$. If we prove that \\eqref{MaxPrinc} holds on $[0,T_{max})$, this will automatically prove global existence by a simple continuation principle.
Arguing by contradiction, assume that $t_1< T_{max}$ is the first instant at which two consecutive particles are at distance $1\/MN$ and get closer afterwards, i.e.\n\\[t_1=\\inf\\{ t \\in [0,\\,T_{max}) : \\,\\,\\hbox{there exists}\\, i\\, : x_{i+1}(t) - x_i(t) = 1\/MN \\},\\]\nand there exists $t_2 \\in (t_1,\\,T_{max})$ such that\n\\[x_{i+1}(t) - x_i(t) < \\frac{1}{MN} \\qquad \\forall t \\in (t_1,\\, t_2]\\,. \\]\nNotice that the minimality of $t_1$ ensures that all particles maintain their initial order for all $t \\in [0,\\,t_1)$. At time $t_1$ we have $R_i(t_1)=M$, and hence $v(R_i(t_1))=0$ due to (Av). Substituting this value in the equation \\eqref{Odes} for $x_i$, we easily see that only the terms with $j<i$ survive, whence $\\dot{x}_i(t_1)\\leq 0$; symmetrically, in the equation for $x_{i+1}$ only the terms with $j>i+1$ survive, whence $\\dot{x}_{i+1}(t_1)\\geq 0$. Moreover, there must exist either some index $k\\geq i+1$ such that $\\dot{x}_k(t_1)>0$ or some index $h\\leq i$ such that $\\dot{x}_h(t_1)<0$, otherwise any two consecutive particles would be placed at distance $1\/MN$ and the system would be static for all $t \\in (t_1,\\,T_{max})$, which would contradict the existence of $t_2$.\n\nThe above considerations imply that we can assume, without loss of generality, that\n\\[ \\dot{x}_{i+1}(t_1) >0,\\quad \\mbox{ and } \\quad\\dot{x}_i(t_1) \\leq 0\\,. \\]\nLet $\\varepsilon_{i+1}>0$ be small enough such that $t_1+ \\varepsilon_{i+1} < t_2$; then by Taylor expansion one has\n\\[ x_{i+1}(t) = x_{i+1}(t_1) + \\dot{x}_{i+1}(t_1)(t-t_1) + o(|t-t_1|)\\,, \\]\nwhere, up to taking $\\varepsilon_{i+1}$ even smaller, the contribution $o(|t-t_1|)$ does not affect the sign of $\\dot{x}_{i+1}(t_1)(t-t_1)$. As a consequence, $x_{i+1}(t) > x_{i+1}(t_1)$ for all $t\\in (t_1,t_1 +\\varepsilon_{i+1})$, and a symmetric argument gives also $x_i(t) \\leq x_i(t_1)$ for all $t\\in (t_1,t_1+\\varepsilon_{i})$. In particular, we deduce that\n\\[ x_{i+1}(t) - x_i(t) \\geq x_{i+1}(t_1) - x_i(t_1) = \\frac{1}{MN} \\quad \\forall t \\in (t_1,\\, t_1 + \\min\\{\\varepsilon_{i},\\,\\varepsilon_{i+1} \\}) \\]\nand this contradicts the existence of $t_2$.
This argument ensures both the validity of~\\eqref{MaxPrinc} and the existence of solutions for all times $t>0$.\n\\end{proof}\n\nLet us consider the discrete density\n\\begin{equation*}\n\\rho(t,\\,x):= \\sum_{i=0}^{N-1} R_i (t) \\chi_{[x_i(t),\\,x_{i+1}(t))}(x).\n\\end{equation*}\nA straightforward consequence of Lemma \\ref{lem:maximum} is that\n\\[\\rho(t,x)\\leq M\\qquad \\hbox{for all $(t,x)\\in [0,+\\infty)\\times\\mathbb R$}.\\]\nMoreover, we observe that $\\rho$ has unit mass on $\\mathbb R$ for all times.\n\nAs already mentioned before, a straightforward consequence of the above Maximum Principle is that the particles can never touch or cross each other. In particular, the particle $x_0$ has no particles to its left for all times, which means that the ODE for $x_0$ only features terms with $j>0$ in the nonlocal sum. A symmetric statement holds for $x_N$. As a consequence, $\\dot{x}_0(t) \\geq 0$ and $\\dot{x}_N(t) \\leq 0$ for all $t$, so that the support of $\\rho^N(t,\\,\\cdot)$ remains inside the fixed compact interval $[\\bar{x}_0,\\,\\bar{x}_N]$, uniformly in $N$ and $t$.\nWe summarize this property in the next lemma.\n\n\\begin{lemma}\\label{lem:support}\nUnder the same assumptions of Lemma \\ref{lem:maximum}, the support of $\\rho(t,\\cdot)$ is contained in the interval $[\\bar{x}_0,\\bar{x}_N]$ for all times $t\\in [0,+\\infty)$.\n\\end{lemma}\n\n\\section{Convergence of the particle scheme}\\label{sec:convergence}\n\nWe now focus on the convergence of the particle scheme \\eqref{Odes}, where the initial condition \\eqref{eq:dscr_IC} is constructed from an $L^\\infty(\\mathbb R)$ initial density $\\bar\\rho$ having compact support and finite total variation.\n\nThe proof of Theorem~\\ref{main} relies on two main steps: the first one consists in proving that the discrete density $(\\rho^N)$ defined in \\eqref{eq:discrete_density} is strongly convergent (up to a subsequence) to a limit $\\rho$ in $L^1([0,T] \\times \\mathbb R)$, the second one is to show that the limit $\\rho$ is a weak
entropy solution of~\\eqref{CauchyProblem} according to Definition \\ref{solentropicadef}. In this section we take care of the former step. As we will show in Propositions~\\ref{totalvariation} and~\\ref{continuitytime} below, the sequence $(\\rho^N)_{N\\in \\mathbb N}$ satisfies good compactness properties with respect to the space variable but, on the other hand, we cannot reach a uniform $L^1$ control on the time oscillations. In our case, we are only able to prove a uniform time continuity estimate with respect to the $1$-Wasserstein distance (see \\cite{villani_book}), which nevertheless will suffice to achieve the required compactness in the product space. Such a strategy recalls the one used in \\cite{DiFra-Rosini} for the case of a scalar conservation law. The main result of this section is the content of the following\n\n\\begin{theorem}\\label{convergence}\nUnder the assumptions of Theorem~\\ref{main}, the sequence $\\rho^N$ is strongly relatively compact in $L^1([0,T] \\times \\mathbb R)$.\n\\end{theorem}\n\nThe proof of Theorem~\\ref{convergence} relies on a generalized statement of the celebrated Aubin-Lions Lemma (see \\cite{RS,DiFra-Matthes,DiFra-Fagioli-Rosini}) that we recall here for the reader's convenience. In what follows, $d_{W^1}$ denotes the $1$-Wasserstein distance.\n\n\\begin{theorem}[Generalized Aubin-Lions Lemma]\\label{thm:aubin}\nLet $\\tau>0$ be fixed.
Let $\\eta^N$ be a sequence in $L^{\\infty}((0,\\,\\tau); L^1(\\mathbb R))$ such that\n$\\eta^N(t,\\,\\cdot) \\geq 0$ and $\\| \\eta^N(t,\\,\\cdot) \\|_{L^1(\\mathbb R)}=1$ for every $N\\in\\mathbb N$ and $t\\in [0,\\,\\tau]$.\nIf the following conditions hold\n\\begin{enumerate}\n\\item[I)] $\\sup_{N} \\int_0^{\\tau} \\left[\\|\\eta^N(t,\\,\\cdot)\\|_{L^1(\\mathbb R)} + TV\\big[ \\eta^N(t,\\,\\cdot)\\big]+ \\mathrm{meas}(\\mathrm{supp}[\\eta^N(t,\\cdot)])\\right]dt < \\infty$,\n\\item[II)] there exists a constant $C>0$ independent of $N$ such that $d_{W^1}\\big( \\eta^N(t,\\,\\cdot), \\eta^N(s,\\,\\cdot) \\big) \\leq C |t-s|$ for all $s,\\,t \\in (0,\\,\\tau)$,\n\\end{enumerate}\nthen $\\eta^N$ is strongly relatively compact in $L^1([0,\\,\\tau]\\times \\mathbb R)$.\n\\end{theorem}\n\nIn view of Theorem \\ref{thm:aubin}, the result in Theorem \\ref{convergence} will follow as a consequence of the following two propositions.\n\n\\begin{prop}\\label{totalvariation}\nLet $\\bar{\\rho},\\,v,\\,K$ and $T$ be as in the statement of Theorem~\\ref{main}. Then, there exists a positive constant $C>0$ (only depending on $K$, $v$, and on $\\mathrm{supp}(\\bar\\rho)$) such that for every $N \\in \\mathbb N$ one has\n\\begin{equation}\\label{TV}\nTV[\\rho^N(t,\\,\\cdot)] \\leq TV[\\bar{\\rho}] e^{Ct} \\qquad \\hbox{for all $t \\in [0,T]$}\\,.\n\\end{equation}\n\\end{prop}\n\n\\begin{prop}\\label{continuitytime}\nLet $\\bar{\\rho},\\,v,\\,K$ and $T$ be as in the statement of Theorem~\\ref{main}.
Then, there exists a positive constant $C>0$ (only depending on $K$) such that\n\\begin{equation}\\label{contime}\nd_{W^1}\\big( \\rho^N(t,\\,\\cdot), \\rho^N(s,\\,\\cdot) \\big) \\leq C |t-s| \\quad \\hbox{for all $s,\\,t \\in (0,\\,T)$, and for all $N \\in \\mathbb N$}\\,.\n\\end{equation}\n\\end{prop}\n\n\nThe remaining part of this section is devoted to proving Propositions~\\ref{totalvariation} and~\\ref{continuitytime}.\nFor future use we compute\n\\begin{align}\n\\dot{R}_i(t) =& -N(R_i)^2 (\\dot{x}_{i+1} - \\dot{x}_i) = -N(R_i)^2 \\Big[ -2v(R_i)\\frac{1}{N}K'(x_{i+1} - x_i) \\nonumber \\\\\n& -(v(R_{i+1}) - v(R_i))\\frac{1}{N}\\sum_{j > i+1} K'(x_{i+1} - x_j)\\nonumber\\\\\n& - v(R_i) \\frac{1}{N}\\sum_{j > i+1}\\big( K'(x_{i+1} - x_j) - K'(x_i - x_j) \\big) \\nonumber\\\\\n& -(v(R_i) - v(R_{i-1}))\\frac{1}{N}\\sum_{j < i} K'(x_i - x_j)\\nonumber\\\\\n& - v(R_i) \\frac{1}{N}\\sum_{j < i}\\big( K'(x_{i+1} - x_j) - K'(x_i - x_j) \\big) \\Big]\\,. \\label{eq:explcit_Rdot}\n\\end{align}\n\n\\begin{proof}[Proof of Proposition~\\ref{totalvariation}]\nWe shall prove the Gronwall-type estimate\n\\begin{equation}\\label{dTVlimitata}\n\\frac{d}{dt} TV[\\rho^N(t,\\,\\cdot)] \\leq C\\, TV[\\rho^N(t,\\,\\cdot)]\\,,\n\\end{equation}\nfrom which \\eqref{TV} follows. Fix $t>0$. The total variation of $\\rho^N$ at time $t$ is given by\n\\begin{align*}\nT&V[\\rho^N(t,\\,\\cdot)] = R_0(t) + R_{N}(t) + \\sum_{i=0}^{N-1} |R_{i+1}(t) - R_i(t)| \\\\\n&=\\sum_{i=1}^{N-1}R_i [\\mathrm{sign}(R_i - R_{i-1}) - \\mathrm{sign}(R_{i+1}-R_i)] - R_0 (\\mathrm{sign}(R_1-R_0) -1)\\\\\n&\\, + R_N( \\mathrm{sign}(R_N - R_{N-1}) + 1) \\\\\n&=\\mu_0(t)R_0(t)+\\mu_N(t)R_N + \\sum_{i=1}^{N-1}R_i \\mu_i,\n\\end{align*}\nwhere we set for brevity\n\\begin{align*}\n & \\mu_i(t):=\\mathrm{sign}(R_i(t) - R_{i+1}(t)) - \\mathrm{sign}(R_{i-1}(t) - R_i(t))\\qquad i=1,\\ldots,N-1,\\\\\n & \\mu_0(t)= \\big( 1- \\mathrm{sign}(R_1 -R_0) \\big),\\\\\n & \\mu_N(t)=\\big(1+ \\mathrm{sign}(R_N -R_{N-1}) \\big).\n\\end{align*}\nThen we can compute\n\\begin{align*}\n \\frac{d}{dt} TV[\\rho^N(t,\\,\\cdot)] &= \\dot{R}_0(t) +\\dot{R}_{N}(t) + \\sum_{i=0}^{N-1} \\mathrm{sign}\\big(R_{i+1}(t) - R_i(t)\\big)\\big( \\dot{R}_{i+1}(t) - \\dot{R}_i(t) \\big) \\\\\n&= \\mu_0(t)\\dot{R}_0(t) + \\mu_N(t) \\dot{R}_N(t) + \\sum_{i=1}^{N-1} \\mu_i(t)\\dot{R}_i(t)\\,.\n\\end{align*}\nThe value of the coefficient $\\mu_i(t)$ clearly depends on the positions of the consecutive
particles; it is easy to see that, for $i \\in \\{ 1,\\,\\ldots,\\,N-1\\}$,\n\\begin{equation*}\n\\mu_i(t)= \\left\\lbrace\\begin{array}{lll}\n-2 \\quad &\\mbox{if $R_{i+1} > R_i$ and $R_{i-1}> R_i$},\\\\\n2 \\quad &\\mbox{if $R_{i+1} < R_i$ and $R_{i-1}< R_i$},\\\\\n0 \\quad &\\mbox{if $R_{i+1} \\geq R_i \\geq R_{i-1}$ or $R_{i-1}\\geq R_i \\geq R_{i+1}$,}\n\\end{array}\n\\right.\n\\end{equation*}\nmoreover\n\\begin{equation*}\n\\mu_0(t)=\\left\\lbrace \\begin{array}{ll}\n0 \\quad \\mbox{if $R_1 > R_0$,}\\\\\n2 \\quad \\mbox{if $R_1 < R_0$,}\n\\end{array} \\right.\n\\qquad\n\\mu_N(t)= \\left\\lbrace\\begin{array}{ll}\n0 \\quad \\mbox{if $R_{N-1} > R_N$,}\\\\\n2 \\quad \\mbox{if $R_{N-1} < R_N$.}\n\\end{array}\\right.\n\\end{equation*}\nRecalling \\eqref{eq:explcit_Rdot}, we can rewrite\n\\begin{equation}\\label{eq:ddt_estimate}\n\\frac{d}{dt} TV[\\rho^N(t,\\,\\cdot)] = \\mu_0(t)\\dot{R}_0(t) + \\mu_N(t) \\dot{R}_N(t) - \\sum_{i=1}^{N-1} \\mu_i(t)(R_i(t))^2 \\emph{I}_i - \\sum_{i=1}^{N-1} \\mu_i(t)R_i(t) \\emph{II}_i\\,,\n\\end{equation}\nwhere\n\\[ \\emph{I}_i = -\\big(v(R_{i+1}(t)) - v(R_i(t))\\big)\\sum_{j > i+1} K'(x_{i+1}(t) - x_j(t)) - \\big(v(R_i(t)) - v(R_{i-1}(t))\\big)\\sum_{j < i} K'(x_i(t) - x_j(t))\\,, \\]\nand\n\\begin{align*}\n & \\emph{II}_i = -R_i(t)v(R_i(t)) \\sum_{j \\neq i,\\,i+1} \\big( K'(x_{i+1}(t) - x_j(t)) - K'(x_i(t) - x_j(t)) \\big)\\\\\n & \\,\\, - 2R_i(t)v(R_i(t))K'(x_{i+1}(t)-x_i(t))\\,.\n\\end{align*}\nLet us first estimate $-\\sum_{i=1}^{N-1}\\mu_i(t)(R_i(t))^2\\emph{I}_i$ in \\eqref{eq:ddt_estimate}. Clearly, the only relevant contributions in the sum come from the particles $x_i$ for which $\\mu_i(t)\\neq 0$. If the index $i$ is such that $\\mu_i(t)=-2$, then $R_{i+1},\\, R_{i-1} > R_i$ and the monotonicity of $v$ implies\n\\[ v(R_{i+1}(t)) - v(R_i(t)) <0\\,, \\quad\\mbox{ and }\\quad v(R_i(t))-v(R_{i-1}(t)) >0\\,.
\\]\nThe assumption (AK) on $K$ ensures that $\\emph{I}_i < 0$, and therefore $-\\mu_i(t)(R_i (t))^2 \\emph{I}_i = 2(R_i(t))^2 \\emph{I}_i <0$.\nAn analogous argument implies that, if $i$ is such that $\\mu_i(t)=2$, then $\\emph{I}_i > 0$ and $-\\mu_i(t)(R_i (t))^2 \\emph{I}_i = -2(R_i(t))^2 \\emph{I}_i <0$. These considerations lead immediately to\n\\begin{equation}\\label{stuck[I]}\n-\\sum_{i=1}^{N-1}\\mu_i(t)(R_i(t))^2\\emph{I}_i \\leq 0\\,.\n\\end{equation}\nLet us now focus on $-\\sum_{i=1}^{N-1}\\mu_i(t)R_i(t)\\emph{II}_i$. In this case, we would like to obtain an upper bound in terms of $TV[\\rho^N(t,\\,\\cdot)]$ and for this purpose we need to estimate $| II_i |$. We recall that $K'$ is locally Lipschitz and that $v(\\rho)\\in [0,v_{max}]$. The former in particular implies that $K'$ has a finite Lipschitz constant on the compact interval $[-2\\mathrm{meas}(\\mathrm{supp}(\\bar\\rho)),2\\mathrm{meas}(\\mathrm{supp}(\\bar\\rho))]$; we call this constant $L=L(\\bar\\rho)$. We get\n\\begin{align*}\n|\\emph{II}_i| &= R_i(t)|v(R_i(t))| \\left| -\\sum_{j \\neq i,\\,i+1} \\big(K'(x_{i+1}(t) - x_j(t)) - K'(x_i(t) - x_j(t)) \\big) -2K'(x_{i+1}(t) - x_i(t))\\right| \\\\\n&\\leq R_i(t)\\,L\\,v_{max} \\frac{N-2}{N} \\frac{1}{R_i(t)} + 2v_{max}\\,L\\frac{1}{N} \\leq L\\,v_{max}\\,,\n\\end{align*}\nand this gives\n\\begin{equation}\\label{stuck[II]}\n\\left| -\\sum_{i=1}^{N-1}\\mu_i(t)R_i(t)\\emph{II}_i \\right| \\leq L\\,v_{max} \\sum_{i=1}^{N-1}|\\mu_i(t)|R_i(t) \\leq L\\,v_{max}\\,TV[\\rho^N(t,\\,\\cdot)].\n\\end{equation}\nWe can now focus on $\\dot{R}_0$ and $\\dot{R}_N$. Since the setting is symmetric, we only present the argument for $\\mu_0(t)\\dot{R}_0$ and leave the one for $\\mu_N(t)\\dot{R}_N$ to the reader. 
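As an aside, the summation-by-parts identity expressing $TV[\\rho^N(t,\\,\\cdot)]$ through the coefficients $\\mu_i$, used at the beginning of this proof, can be checked numerically. The following Python sketch is purely illustrative (random piecewise-constant values standing in for the $R_i$) and is not part of the argument.

```python
import numpy as np

# Sanity check of the identity
#   TV = R_0 + R_M + sum_{i=0}^{M-1} |R_{i+1} - R_i|
#      = mu_0 R_0 + mu_M R_M + sum_{i=1}^{M-1} mu_i R_i,
# with mu_i = sign(R_i - R_{i+1}) - sign(R_{i-1} - R_i),
#      mu_0 = 1 - sign(R_1 - R_0),  mu_M = 1 + sign(R_M - R_{M-1}).

def tv_direct(R):
    # total variation computed directly from the definition
    return R[0] + R[-1] + np.abs(np.diff(R)).sum()

def tv_via_mu(R):
    # total variation recovered from the coefficients mu_i
    s = np.sign(np.diff(R))          # s[i] = sign(R_{i+1} - R_i)
    mu0 = 1.0 - s[0]                 # boundary coefficient at i = 0
    muM = 1.0 + s[-1]                # boundary coefficient at i = M
    mu = s[:-1] - s[1:]              # interior coefficients mu_1, ..., mu_{M-1}
    return mu0 * R[0] + muM * R[-1] + (mu * R[1:-1]).sum()

rng = np.random.default_rng(0)
R = rng.uniform(0.1, 1.0, size=200)  # arbitrary positive piecewise values
assert np.isclose(tv_direct(R), tv_via_mu(R))
```

The check confirms, in particular, that the identity also covers flat regions ($R_{i+1}=R_i$), where the corresponding sign vanishes.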
Since $\\mu_0(t)\\neq 0$ only if $R_1(t)<R_0(t)$, in which case $(v(R_1) - v(R_0)) \\geq 0$, and since $\\sum_{j>1} K'(x_0 - x_j) \\leq 0$ by (AK), we can compute\n\\begin{align*}\n\\mu_0\\dot{R}_0 &= \\mu_0R_0 [R_0v(R_1)\\sum_{j>1}\\big(K'(x_1-x_j) - K'(x_0-x_j)\\big) +2R_0v(R_0)K'(x_1-x_0)] \\\\\n&\\quad + \\mu_0(R_0)^2(v(R_1) - v(R_0))\\sum_{j>1} K'(x_0 - x_j) \\\\\n&\\leq \\mu_0R_0 [R_0v(R_1)\\sum_{j>1}\\big(K'(x_1-x_j) - K'(x_0-x_j)\\big) +2R_0v(R_0)K'(x_1-x_0)]\\,.\n\\end{align*}\nMoreover,\n\\[ \\left| R_0v(R_1)\\sum_{j>1}\\big(K'(x_1-x_j) - K'(x_0-x_j)\\big) +2R_0v(R_0)K'(x_1-x_0)\\right| \\leq v_{max}\\,L\\frac{N-1}{N} + \\frac{2v_{max}\\,L}{N}\\,. \\]\nIn particular, $\\mu_0\\dot{R}_0 \\leq 3v_{max}\\,L\\, R_0$ and\n\\begin{equation}\\label{primoeultimotermine}\n\\mu_0\\dot{R}_0 + \\mu_N(t)\\dot{R}_N \\leq 3v_{max}\\,L\\, (R_0 + R_N) \\leq 3v_{max}\\,L\\, TV[\\rho^N(t,\\,\\cdot)]\\,.\n\\end{equation}\nBy putting together~\\eqref{stuck[I]},~\\eqref{stuck[II]} and~\\eqref{primoeultimotermine} we get estimate~\\eqref{dTVlimitata}, and~\\eqref{TV} follows as a consequence of the Gronwall Lemma.\n\\end{proof}\n\nWe now prove the equi-continuity in time of $\\rho^N$ with respect to the $1$-Wasserstein distance.\n\n\\proofof{Proposition~\\ref{continuitytime}}\nAssume without loss of generality that $0< s < t < T$. Recalling that the $1$-Wasserstein distance between two probability measures coincides with the $L^1$ distance between the corresponding pseudo-inverse functions, it is enough to prove that there exists a constant $C>0$ independent of $N$ such that\n\\[ \\| X_{\\rho^N(t,\\cdot)} - X_{\\rho^N(s,\\,\\cdot)} \\|_{L^1([0,1])} < C|t-s|, \\]\nfor all $s,\\,t \\in (0,T)$.\nBy the definition of $\\rho^N$ we can explicitly compute\n\\[ X_{\\rho^N(t,\\,\\cdot)}(z) = \\sum_{i=0}^{N-1} \\left(x_i^N(t) + \\left(z-i\\frac{1}{N}\\right) \\frac{1}{R_i^N(t)}\\right) \\textbf{1}_{[i\\frac{1}{N},\\,(i+1)\\frac{1}{N})}(z)\\,. 
\\]\nTherefore,\n\\begin{align*}\nd_{1}\\big( \\rho^N(t,\\,\\cdot), \\rho^N(s,\\,\\cdot) \\big) &= \\| X_{\\rho^N(t,\\,\\cdot)} - X_{\\rho^N(s,\\,\\cdot)} \\|_{L^1([0,\\,1])} \\\\\n&\\leq \\sum_{i=0}^{N-1} \\int_{i\/N}^{(i+1)\/N} \\left| x_i^N(t) - x_i^N(s) + \\left(z - \\frac{i}{N} \\right) \\left(\\frac{1}{R_i^N(t)} - \\frac{1}{R_i^N(s)} \\right) \\right| dz \\\\\n&\\leq \\sum_{i=0}^{N-1} \\frac{1}{N} |x_i^N(t) - x_i^N(s)| + \\sum_{i=0}^{N-1} \\left|\\frac{1}{R_i^N(t)} - \\frac{1}{R_i^N(s)} \\right| \\int_{i\/N}^{(i+1)\/N} \\left(z - \\frac{i}{N} \\right)dz \\\\\n&= \\sum_{i=0}^{N-1} \\frac{1}{N} |x_i^N(t) - x_i^N(s)| + \\sum_{i=0}^{N-1} \\frac{1}{2N^2} \\int_s^t \\left| \\frac{d}{d\\tau} \\frac{1}{R_i^N(\\tau)}\\right| d\\tau \\\\\n&\\leq 3 \\sum_{i=0}^{N} \\frac{1}{N} \\int_s^t \\left|\\dot{x}_i^N(\\tau)\\right| d\\tau\\,,\n\\end{align*}\nwhere in the last inequality we used that\n\\[ \\left| \\frac{d}{d\\tau} \\frac{1}{R_i^N(\\tau)} \\right| = N |\\dot{x}_{i+1}^N(\\tau) - \\dot{x}_i^N(\\tau)| \\leq N |\\dot{x}_{i+1}^N(\\tau)| + N|\\dot{x}_i^N(\\tau)|\\,. \\]\nNotice that we can control $|\\dot{x}_i^N(\\tau)|$ uniformly in $N$ and in $\\tau$. Indeed, recalling the assumption (AK), setting $L$ as the Lipschitz constant of $K'$ on the interval $[-2\\mathrm{meas}(\\mathrm{supp}(\\bar\\rho)),2\\mathrm{meas}(\\mathrm{supp}(\\bar\\rho))]$ as in the proof of Proposition \\ref{totalvariation}, we have\n\\[ |\\dot{x}_i^N(\\tau)| = \\frac{1}{N}\\left|-v(R_i(\\tau)) \\sum_{j>i} K'(x_i - x_j) - v(R_{i-1}(\\tau))\\sum_{j < i} K'(x_i - x_j)\\right| \\leq 2\\,v_{max}\\,L\\,\\mathrm{meas}(\\mathrm{supp}(\\bar\\rho))\\,, \\]\nwhich gives \\eqref{contime} and concludes the proof.\n\\end{proof}\n\n\\begin{remark}\\label{rem:empirical}\n\\emph{Let $W$ be a Lipschitz continuous kernel (in the sequel we shall take $W=K'$ or $W=K''$). We claim that there exists a constant $C>0$ depending only on $\\bar\\rho$ such that\n \\[\\sup_{t\\geq 0}\\|W\\ast \\rho^N(t,\\cdot)-W\\ast \\hat{\\rho}^N(t)\\|_{L^1}\\leq \\frac{C}{N},\\]\n for all $N\\in \\mathbb{N}$. To prove this, let $\\gamma^N_0(t)$ be an optimal plan between $\\rho^N(t,\\cdot)$ and $\\hat{\\rho}^N(t,\\cdot)$ with respect to the cost $c(x)=|x|$. 
We then estimate, for all $t\\geq 0$,\n \\begin{align*}\n & \\|W\\ast \\rho^N-W\\ast \\hat{\\rho}^N\\|_{L^1(\\mathbb R)} = \\int_\\mathbb R \\left|\\int_\\mathbb R W(x-y)\\, d\\rho^N(t,\\cdot)(y)-\\int_\\mathbb R W(x-y)\\,d\\hat{\\rho}^N(t)(y)\\right|\\, dx\\\\\n & \\ =\\int_\\mathbb R \\left|\\iint_{\\mathbb R^2} \\left(W(x-y)-W(x-z)\\right)\\, d\\gamma_0^N(t)(y,z)\\right|\\, dx\\\\\n & \\ \\leq C \\int_\\mathbb R \\iint_{\\mathbb R^2}|y-z|\\,d\\gamma_0^N(t)(y,z)\\,dx,\n \\end{align*}\n where we have used that $W$ is Lipschitz and that the supports of $\\rho^N$ and $\\hat{\\rho}^N$ are contained in $\\mathrm{supp}(\\bar\\rho)$, which is bounded and independent of time. By definition of the $1$-Wasserstein distance we therefore have\n \\[\\|W\\ast \\rho^N-W\\ast \\hat{\\rho}^N\\|_{L^1(\\mathbb R)} \\leq C d_1(\\rho^N(t,\\cdot),\\hat{\\rho}^N(t)) \\leq \\frac{\\tilde{C}}{N},\\]\n for some suitable constant $\\tilde{C}>0$ in view of Lemma \\ref{lem:empirical1}.}\n\\end{remark}\n\nOur next goal is to prove that the entropy inequality\n\\[0 \\leq \\int_0^T \\int_{\\mathbb R} |\\rho^N - c|\\varphi_t - \\mathrm{sign}(\\rho^N - c)[(f(\\rho^N) - f(c))K' \\ast \\hat{\\rho}^N \\varphi_x - f(c)K'' \\ast \\hat{\\rho}^N \\varphi]dxdt\\]\nholds asymptotically as $N\\to+\\infty$, in the $\\liminf$ sense made precise below, for every non-negative test function $\\varphi \\in \\mathcal{C}^{\\infty}_c ((0,+\\infty)\\times \\mathbb R)$ and every constant $c\\geq 0$. Achieving this, which requires some tedious calculations, is however not enough to prove that the limit $\\rho$ of the previous section is an entropy solution, because of the discontinuity of the sign function in the above inequality, which does not allow us to pass to the limit as $\\rho^N\\rightarrow \\rho$ almost everywhere and in $L^1$. To bypass this problem we shall then introduce a $\\delta$-regularization of the sign function in order to first let $N\\rightarrow +\\infty$ and then $\\delta\\searrow 0$. 
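The estimate of Remark \\ref{rem:empirical}, namely that replacing the piecewise-constant density by the empirical measure inside the convolution costs an error of order $1\/N$ in $L^1$, can also be observed numerically. The following Python sketch is purely illustrative and not part of the proofs: it uses an ad hoc Gaussian kernel and the uniform density on $[0,1]$ with atoms at the left endpoints of the blocks, all chosen here only for the experiment.

```python
import numpy as np

# Compare W * rho with W * rho_hat in L^1, where rho is the uniform density
# on [0,1], rho_hat puts mass 1/N at x_j = j/N, and W is a smooth Lipschitz
# kernel (a Gaussian, chosen for convenience).
W = lambda x: np.exp(-x**2)

def l1_error(N, grid):
    xj = np.arange(N) / N                            # atoms of the empirical measure
    conv_hat = W(grid[:, None] - xj[None, :]).sum(axis=1) / N
    # reference convolution W * rho via a fine midpoint quadrature in y
    y = (np.arange(50 * N) + 0.5) / (50 * N)
    conv_rho = W(grid[:, None] - y[None, :]).mean(axis=1)
    dx = grid[1] - grid[0]
    return np.abs(conv_hat - conv_rho).sum() * dx    # discrete L^1 norm in x

grid = np.linspace(-2.0, 3.0, 1001)
errs = [l1_error(N, grid) for N in (20, 40, 80, 160)]
assert errs[0] > errs[1] > errs[2] > errs[3]         # error decreases with N
assert errs[3] < errs[0] / 4                         # consistent with O(1/N) decay
```

Halving is observed at each doubling of $N$, in agreement with the first-order rate predicted by the remark.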
In the last part of the section we prove the uniqueness of entropy solutions, which allows us to conclude that the whole approximating sequence $\\rho^N$ converges to $\\rho$, thus completing the proof of our main Theorem~\\ref{main}.\n\n\n\\begin{lemma}\\label{entropiarhoN}\nFor every non-negative $\\varphi \\in C^{\\infty}_c ((0,+\\infty)\\times \\mathbb R)$ and every $c\\geq 0$, the following inequality holds\n\\begin{equation}\\label{entropiaN}\n\\liminf_{N\\rightarrow +\\infty}\\int_0^T \\int_{\\mathbb R} |\\rho^N - c| \\varphi_t - \\mathrm{sign}(\\rho^N -c)[(f(\\rho^N) - f(c))K' \\ast \\hat{\\rho}^N \\varphi_x - f(c)K'' \\ast \\hat{\\rho}^N \\varphi] dx dt \\geq 0.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nLet $T>0$ be such that $\\mathrm{supp}(\\varphi)\\subset[0,T]\\times \\mathbb R$. The basic idea of the proof is rather simple, although the computations are quite technical: we need to rewrite the left hand side of the inequality so that it is possible to isolate a term with positive sign and then show that the remaining terms give negligible contributions as $N \\to \\infty$.\nBy definition of $\\rho^N$ and $\\hat{\\rho}^N$ we obtain\n\\[ \\int_0^T \\int_{\\mathbb R} |\\rho^N - c| \\varphi_t - \\mathrm{sign}(\\rho^N -c)[(f(\\rho^N) - f(c))K' \\ast \\hat{\\rho}^N \\varphi_x - f(c)K'' \\ast \\hat{\\rho}^N \\varphi] dx dt = B.T._1 + \\sum_{i=0}^{N-1} I_i + \\sum_{i=0}^{N-1} II_i, \\]\nwhere\n\\begin{align*}\n& I_i := \\int_0^T \\int_{x_i}^{x_{i+1}} |R_i^N -c| \\varphi_t\\,dxdt ,\\\\\n& II_i := - \\int_0^T \\int_{x_i}^{x_{i+1}} \\mathrm{sign}(R_i^N - c)(f(R_i^N) - f(c))K' \\ast \\hat{\\rho}^N \\varphi_x\\,dxdt \\\\\n&\\hspace*{1.2cm}+ \\int_0^T \\int_{x_i}^{x_{i+1}} f(c) \\mathrm{sign}(R_i^N - c) K'' \\ast \\hat{\\rho}^N \\varphi\\, dxdt, \\\\\n& B.T._1 := \\int_0^T \\int_{-\\infty}^{x_0} c\\varphi_t - f(c)[K' \\ast \\hat{\\rho}^N \\varphi_x + K'' \\ast \\hat{\\rho}^N \\varphi]\\,dxdt \\\\\n&\\hspace*{1.2cm}+ \\int_0^T \\int_{x_N}^{\\infty} c\\varphi_t - f(c)[K' \\ast 
\\hat{\\rho}^N \\varphi_x + K'' \\ast \\hat{\\rho}^N \\varphi]\\,dxdt.\n\\end{align*}\nFor simplicity of notation we set $S^N_i : = \\mathrm{sign}(R_i^N - c)$ and we omit the dependence on $N$ and $t$ wherever it is clear from the context.\nIntegrating by parts and recalling the definition of $\\hat{\\rho}^N$ and the expression for $\\dot{R}_i$, we can rewrite $I_i$ as\n\\begin{align*}\nI_i=& \\int_0^T S_i R_i (\\dot{x}_{i+1} - \\dot{x}_i) \\left(\\hbox{\\ }\\Xint{\\hbox{\\vrule height -0pt width 10pt depth 1pt}}_{x_{i}}^{x_{i+1}} \\varphi(t,x) dx - \\varphi(t,x_{i+1})\\right)dt \\\\\n&+ \\int_0^T S_i [R_i (\\dot{x}_{i+1} - \\dot{x}_i) \\varphi(t,x_{i+1}) - (R_i -c) (\\dot{x}_{i+1}\\varphi(t,x_{i+1}) - \\dot{x}_i \\varphi(t,x_i))]dt,\n\\end{align*}\nand $II_i$ as\n\\begin{align*}\nII_i =& -\\int_0^T S_i\\frac{(f(R_i) - f(c))}{N}\\sum_{j=0}^N (K'(x_{i+1} - x_j)\\varphi(t,x_{i+1}) - K'(x_i - x_j)\\varphi(t,x_i))dt \\\\\n&+ \\int_0^T S_i \\frac{f(R_i)}{N} \\sum_{j=0}^N \\int_{x_i}^{x_{i+1}} K''(x-x_j)\\varphi(t,x)dxdt\\,.\n\\end{align*}\nThen the sum $I_i + II_i$ becomes\n\\[I_i + II_i = A^1_i + A^2_i + Z_i, \\]\nwhere we set\n\\begin{align*}\nA^1_i &= \\int_0^T S_i R_i (\\dot{x}_{i+1} - \\dot{x}_i) \\left(\\hbox{\\ }\\Xint{\\hbox{\\vrule height -0pt width 10pt depth 1pt}}_{x_{i}}^{x_{i+1}} \\varphi(t,x) dx - \\varphi(t,x_{i+1})\\right)dt, \\\\\nA^2_i &= \\int_0^T S_i \\frac{f(R_i)}{N} \\sum_{j=0}^N \\int_{x_i}^{x_{i+1}} K''(x-x_j)\\varphi(t,x)dxdt,\n\\end{align*}\nand\n\\begin{align*}\nZ_i = &- \\int_0^T S_i\\varphi(t,x_{i+1})[R_i \\dot{x}_i + \\frac{f(R_i)}{N}\\sum_{j=0}^N K'(x_{i+1}-x_j)]dt \\\\\n&+ \\int_0^T S_i\\varphi(t,x_{i+1})[c \\dot{x}_{i+1} + \\frac{f(c)}{N}\\sum_{j=0}^N K'(x_{i+1}-x_j)]dt \\\\\n&+ \\int_0^T S_i\\varphi(t,x_{i})[R_i \\dot{x}_i + \\frac{f(R_i)}{N}\\sum_{j=0}^N K'(x_{i}-x_j)]dt \\\\\n&- \\int_0^T S_i\\varphi(t,x_{i})[c \\dot{x}_i + \\frac{f(c)}{N}\\sum_{j=0}^N 
K'(x_{i}-x_j)]dt.\n\\end{align*}\nBy performing a summation by parts, we get\n\\begin{align*}\n\\sum_{i=0}^{N-1} Z_i &= B.T._2 + \\sum_{i=1}^{N-1} \\int_0^T \\varphi(t,x_i) S_i\\left(R_i \\dot{x}_i +\\frac{f(R_i)}{N} \\sum_{j=0}^N K'(x_i -x_j) \\right)dt \\\\\n&\\quad - \\sum_{i=1}^{N-1} \\int_0^T \\varphi(t,x_i) S_{i-1}\\left(R_{i-1}\\dot{x}_{i-1} + \\frac{f(R_{i-1})}{N}\\sum_{j=0}^N K'(x_i-x_j) \\right)dt \\\\\n&\\quad +\\sum_{i=1}^{N-1} \\int_0^T \\varphi(t,x_i)(S_{i-1}- S_i)\\left(c\\dot{x}_i + \\frac{f(c)}{N} \\sum_{j=0}^N K'(x_i-x_j) \\right)dt \\\\\n&= B.T._2 + B.T._3 + \\sum_{i=1}^{N-2} (A_i^3 + A_i^4) + \\sum_{i=1}^{N-1}B_i,\n\\end{align*}\nwhere $B.T._2$ and $B.T._3$ regard the external particles. More precisely, $B.T._2= B.T._{21}+ B.T._{22}$, where\n\\begin{align*}\nB.T._{21} =& c \\int_0^T \\varphi(t,x_N) S_{N-1} \\frac{v(c)-v(R_{N-1})}{N} \\sum_{j=0}^N K'(x_N - x_j)dt \\\\\n& -c \\int_0^T \\varphi(t,x_0) S_{0}\\frac{v(c)-v(R_{0})}{N}\\sum_{j=0}^N K'(x_0 - x_j)dt, \\\\\nB.T._{22} =& \\int_0^T \\varphi(t,x_0)S_{0}R_0 \\left(\\dot{x}_0 + \\frac{v(R_0)}{N}\\sum_{j=0}^N K'(x_0 - x_j)\\right)dt \\\\\n&- \\int_0^T \\varphi(t,x_N)S_{N-1} R_{N-1} \\left(\\dot{x}_{N-1} + \\frac{v(R_{N-1})}{N}\\sum_{j=0}^N K'(x_N - x_j)\\right)dt,\n\\end{align*}\nand $B.T._3$ corresponds to\n\\begin{align*}\nB.T._3 =& \\int_0^T \\varphi(t,x_{N-1}) S_{N-1}\\left(R_{N-1}\\dot{x}_{N-1} + \\frac{f(R_{N-1})}{N} \\sum_{j=0}^N K'(x_{N-1}-x_j)\\right)dt \\\\\n&- \\int_0^T \\varphi(t,x_{0})S_0 \\left( R_{0}\\dot{x}_{0} + \\frac{f(R_{0})}{N} \\sum_{j=0}^N K'(x_{1}-x_j)\\right)dt.\n\\end{align*}\nThe terms $A_i^3,\\,A_i^4$ and $B_i$ concern, instead, the internal particles, and they are defined as follows\n\\begin{align*}\nA_i^3 =& \\int_0^T \\varphi(t,x_i) S_i \\frac{f(R_i)}{N}\\sum_{j=0}^N [K'(x_i-x_j)-K'(x_{i+1}-x_j)]dt, \\\\\nA_i^4 =& \\int_0^T (\\varphi(t,x_i) - \\varphi(t,x_{i+1}))S_i R_i \\left( \\dot{x}_i + \\frac{v(R_i)}{N} \\sum_{j=0}^N K'(x_{i+1}-x_j) \\right)dt,\\\\\nB_i =& 
\\int_0^T \\varphi(t,x_i)(S_{i-1}- S_i)\\left(c\\dot{x}_i + \\frac{f(c)}{N} \\sum_{j=0}^N K'(x_i-x_j) \\right)dt.\n\\end{align*}\nSummarizing, we can rewrite $B.T._1 + \\sum_{i=0}^{N-1} (I_i + II_i)$ as\n\\[ B.T._1 + B.T._{21} + B.T._{22} + B.T._3 + \\sum_{i=0}^{N-1} (A^1_i + A^2_i) + \\sum_{i=1}^{N-2} (A^3_i + A^4_i) + \\sum_{i=1}^{N-1} B_i, \\]\nthen estimate \\eqref{entropiaN} follows if we prove that such a sum is non-negative when $N \\gg 1$, and this can be done by showing that\n\\begin{equation}\\label{partepositiva}\nB.T._1 + B.T._{21} + \\sum_{i=1}^{N-1}B_i \\geq 0,\n\\end{equation}\nwhile\n\\begin{equation}\\label{ordine1\/N}\n\\left| B.T._{22} + B.T._3 + \\sum_{i=0}^{N-1}(A_i^1 + A_i^2) + \\sum_{i=1}^{N-2} (A_i^3 + A_i^4) \\right| \\leq \\frac{C}{N}\n\\end{equation}\nfor a positive constant $C= C(\\varphi,K,\\bar{\\rho},v,T)$.\nThe remaining part of the proof is devoted to showing the validity of \\eqref{partepositiva} and \\eqref{ordine1\/N}.\nWe focus first on \\eqref{partepositiva}. Integrating by parts, recalling that $\\varphi(0,\\cdot)=\\varphi(T,\\cdot)=0$, $\\varphi(t,\\cdot) \\geq 0$ and the assumption (AK), we immediately obtain\n\\begin{equation}\\label{BT1}\nB.T._1 = \\frac{f(c)}{N} \\int_0^T \\left(\\varphi(t,x_N)\\sum_{j=0}^N K'(x_N-x_j) - \\varphi(t,x_0)\\sum_{j=0}^N K'(x_0-x_j)\\right)dt \\geq 0.\n\\end{equation}\nBecause of the monotonicity of $v$ (see (Av)), for all times $t$ we know that\n\\[ S_{0}(t)(v(c) - v(R_{0}(t))) \\geq 0,\\quad \\mbox{ and }\\quad S_{N-1}(t)(v(c) - v(R_{N-1}(t))) \\geq 0 \\]\nthus, recalling again (AK), we deduce\n\\begin{equation}\\label{BT21}\nB.T._{21} \\geq 0.\n\\end{equation}\nLet us now consider the generic term $B_i$. 
Substituting the expression of $\\dot{x}_i$, we get\n\\[ B_i = \\int_0^T\\varphi(t,x_i) (S_{i-1} - S_i)\\left[\\frac{v(c)-v(R_i)}{N} \\sum_{j>i} K'(x_i-x_j) + \\frac{v(c)-v(R_{i-1})}{N} \\sum_{j < i} K'(x_i-x_j)\\right]c\\,dt\\,. \\]\nRecalling the monotonicity of $v$, we have $S_i(v(c) - v(R_i)) \\geq 0$ and $S_{i-1}(v(c) - v(R_{i-1})) \\geq 0$ for all times. Since (AK) implies $K'(x)>0$ if $x>0$, the following holds for all times:\n\\[ (S_{i-1}- S_i)\\left[\\frac{v(c)-v(R_i)}{N} \\sum_{j>i} K'(x_i-x_j) + \\frac{v(c)-v(R_{i-1})}{N} \\sum_{j < i} K'(x_i-x_j)\\right] \\geq 0\\,, \\]\nwhence $B_i \\geq 0$ for every $i$, and \\eqref{partepositiva} follows from \\eqref{BT1} and \\eqref{BT21}. Let us now turn to \\eqref{ordine1\/N}. Since $K''$ is continuous, there exists $L>0$ such that\n\\[ L=\\sup\\left\\{ |K''(x)|\\,,\\,\\, x\\in[-(\\bar{x}_{max}-\\bar{x}_{min}),(\\bar{x}_{max}-\\bar{x}_{min})]\\right\\}. \\]\nSince the argument is quite technical, it is more convenient to split the left hand side of \\eqref{ordine1\/N} into three parts:\n\\[ \\Gamma_1 = B.T._{22} + B.T._3 + A_0^2 + A_{N-1}^2,\\quad \\Gamma_2 = \\sum_{i=0}^{N-1} A_i^1 + \\sum_{i=1}^{N-2} A_i^4,\\quad \\Gamma_3 = \\sum_{i=1}^{N-2} (A_i^2 + A_i^3). \\]\nRecalling that $K',\\,\\varphi$ and $v$ are uniformly bounded and Lipschitz, we get\n\\begin{align}\\label{Gamma1}\n\\notag|\\Gamma_1| \\leq\\, &4 L\\,\\|\\varphi\\|_{L^{\\infty}} \\|v\\|_{L^{\\infty}} \\int_0^T (R_0(x_1 - x_0) + R_{N-1}(x_{N}-x_{N-1}))dt \\\\\n&+ 2 L\\,\\|v\\|_{L^{\\infty}}Lip[\\varphi] \\int_0^T R_{N-1}(x_N - x_{N-1})dt \\leq \\frac{C(\\varphi,v,L,T)}{N}\\,.\n\\end{align}\nThen, inserting the expression of $\\dot{x}_i$, we can rearrange $\\Gamma_2$ in such a way that\n\\begin{align*}\n|\\Gamma_2| &\\leq 3\\sum_{i=0}^{N-1} \\int_0^T R_i \\left|\\hbox{\\ }\\Xint{\\hbox{\\vrule height -0pt width 10pt depth 1pt}}_{x_i}^{x_{i+1}}\\varphi(t,x) - \\varphi(t,x_{i+1})\\right|\\frac{|v(R_{i+1})-v(R_i)|}{N}\\sum_{j=0}^N |K'(x_{i+1}-x_j)| dt \\\\\n&\\quad + \\sum_{i=0}^{N-1} \\int_0^T R_i \\left|\\hbox{\\ }\\Xint{\\hbox{\\vrule height -0pt width 10pt depth 1pt}}_{x_i}^{x_{i+1}}\\varphi(t,x) - \\varphi(t,x_{i+1})\\right| \\frac{v(R_i)}{N} \\sum_{j=0}^N |K'(x_i-x_j)- K'(x_{i+1}-x_j)|dt \\\\\n&\\quad + \\sum_{i=1}^{N-2} \\int_0^T R_i |\\varphi(t,x_i) - \\varphi(t,x_{i+1})|\\frac{|v(R_{i-1})-v(R_i)|}{N}\\sum_{j=0}^N |K'(x_{i}-x_j)|dt \\\\\n&\\quad + \\sum_{i=1}^{N-2} \\int_0^T R_i 
|\\varphi(t,x_i) - \\varphi(t,x_{i+1})|\\frac{v(R_i)}{N} \\sum_{j=0}^N |K'(x_i-x_j)- K'(x_{i+1}-x_j)|dt\n\\end{align*}\nand using the Lipschitz continuity and the uniform bounds on $K',\\,\\varphi,\\,v$, estimate \\eqref{TV} and the uniform bound on the support of $\\rho^N$, it is easy to see that\n\\begin{align}\\label{Gamma2}\n\\notag|\\Gamma_2| \\leq& \\,4L\\, Lip[\\varphi]Lip[v] TV[\\bar{\\rho}] \\int_0^T e^{Ct} \\sum_{i=0}^{N-1} R_i (x_{i+1}-x_i)dt \\\\\n& + 2 L\\,\\|v\\|_{L^{\\infty}} Lip[\\varphi] \\int_0^T \\sum_{i=0}^{N-1} R_i (x_{i+1}-x_i)^2 dt \\leq \\frac{C(\\varphi,v,K,\\bar{\\rho},T)}{N}.\n\\end{align}\nIt remains to show that also $\\Gamma_3$ vanishes as $N \\to \\infty$. In this case, the uniform bound on $K''$ implies\n\\begin{align}\\label{Gamma3}\n\\notag |\\Gamma_3| &\\leq \\sum_{i=1}^{N-2} \\int_0^T \\frac{|f(R_i)|}{N} \\int_{x_i}^{x_{i+1}}|\\varphi(t,x)-\\varphi(t,x_i)| \\sum_{j=0}^N |K''(x-x_j)|dxdt \\\\\n&\\leq L\\,\\|v\\|_{L^{\\infty}} Lip[\\varphi] \\sum_{i=1}^{N-2} \\int_0^T R_i \\int_{x_i}^{x_{i+1}} (x-x_i)dxdt \\leq \\frac{C(\\varphi,v,K,\\bar{\\rho},T)}{N}.\n\\end{align}\nFinally, by putting together \\eqref{Gamma1}, \\eqref{Gamma2} and \\eqref{Gamma3}, we obtain \\eqref{ordine1\/N}; recalling also \\eqref{partepositiva}, estimate \\eqref{entropiaN} follows.\n\\end{proof}\n\nWe are now in a position to prove that the large particle limit $\\rho$ that we obtained in the previous section is an entropy solution for the PDE.\n\n\\begin{lemma}\\label{entropiarho}\nLet $\\rho$ be the limit of $\\rho^N$ up to a subsequence. 
For every non-negative $\\varphi \\in C^{\\infty}_c ([0,+\\infty)\\times \\mathbb R)$ and $c\\geq 0$, one has\n\\begin{equation}\\label{entropia}\n0 \\leq \\int_\\mathbb R |\\bar\\rho - c|\\varphi(0,x)dx + \\int_0^{+\\infty} \\int_{\\mathbb R} |\\rho - c| \\varphi_t - \\mathrm{sign}(\\rho -c)[(f(\\rho) - f(c))K' \\ast \\rho\\, \\varphi_x - f(c)K'' \\ast \\rho\\, \\varphi] dx dt.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nLet $T>0$ be such that $\\mathrm{supp}(\\varphi)\\subset [0,T)\\times \\mathbb R$. Roughly speaking, the statement holds provided we can show that it is possible to pass to the limit as $N \\to \\infty$ in the inequality~\\eqref{entropiaN}. More precisely,\nwe need to prove the following:\n\\begin{align*}\n& \\lim_{N \\to \\infty} \\int_\\mathbb R |\\rho^N(0,x) - c|\\varphi(0,x)dx = \\int_\\mathbb R |\\bar\\rho - c|\\varphi(0,x)dx,\\\\\n& \\lim_{N \\to \\infty} \\int_0^T \\int_{\\mathbb R} |\\rho^N-c|\\,\\varphi_t\\, dx dt = \\int_0^T \\int_{\\mathbb R} |\\rho-c|\\,\\varphi_t \\, dx dt, \\\\\n& \\lim_{N \\to \\infty} \\int_0^T \\int_{\\mathbb R} \\mathrm{sign}(\\rho^N-c)(f(\\rho^N)-f(c))K'\\ast \\hat{\\rho}^N\\,\\varphi_x \\, dx dt\\\\\n & \\qquad = \\int_0^T \\int_{\\mathbb R} \\mathrm{sign}(\\rho-c)(f(\\rho)-f(c))K'\\ast \\rho \\,\\varphi_x \\, dx dt,\\\\\n& \\lim_{N \\to \\infty} \\int_0^T \\int_{\\mathbb R} f(c) \\mathrm{sign}(\\rho^N - c)K'' \\ast \\hat{\\rho}^N\\,\\varphi \\, dx dt = \\int_0^T \\int_{\\mathbb R} f(c) \\mathrm{sign}(\\rho - c)K'' \\ast \\rho\\,\\varphi\\, dx dt\\,.\n\\end{align*}\nThe first two limits are immediate in view of the strong $L^1$-convergence of $\\rho^N(0,\\cdot)$ to $\\bar\\rho$ and of the convergence of $\\rho^N$ to $\\rho$ almost everywhere and in $L^1([0,T] \\times \\mathbb R)$, respectively.\nNotice now that the continuity of $f$ ensures the continuity of the function $h(\\mu):= \\mathrm{sign}(\\mu - c)(f(\\mu)-f(c))$. 
We have\n\\begin{align*}\n\\int_0^T \\int_{\\mathbb R}& [\\mathrm{sign}(\\rho^N-c)(f(\\rho^N)-f(c))K'\\ast \\hat{\\rho}^N - \\mathrm{sign}(\\rho-c)(f(\\rho)-f(c))K'\\ast \\rho ]\\,\\varphi_x\\,dx dt\\\\\n&= \\int_0^T \\int_{\\mathbb R}(h(\\rho)-h(\\rho^N))K'\\ast \\rho\\, \\varphi_x\\,dx dt + \\int_0^T \\int_{\\mathbb R} h(\\rho^N) K'\\ast(\\rho -\\hat{\\rho}^N)\\,\\varphi_x\\,dx dt.\n\\end{align*}\nThen the regularity of $h$ and $K'$ required in the assumptions (Av) and (AK), the convergence of $\\rho^N$ to $\\rho$ almost everywhere in $[0,T] \\times \\mathbb R$, and the strong $L^1$-convergence of $K' \\ast \\hat{\\rho}^N$ to $K' \\ast \\rho$ established in Remark \\ref{rem:empirical} imply that\n\\begin{equation}\\label{bla2}\n\\int_0^T \\int_{\\mathbb R}|[(h(\\rho)-h(\\rho^N))K'\\ast \\rho + h(\\rho^N) K'\\ast(\\rho -\\hat{\\rho}^N)]\\,\\varphi_x|\\,dx dt \\to 0\n\\end{equation}\nas $N$ tends to $+\\infty$. Concerning the fourth limit, instead, we can see that\n\\begin{align*}\n\\int_0^T \\int_{\\mathbb R}& f(c)[\\mathrm{sign}(\\rho^N-c)K''\\ast \\hat{\\rho}^N - \\mathrm{sign}(\\rho-c)K''\\ast\\rho]\\,\\varphi\\,dx dt \\\\\n&= \\int_0^T \\int_{\\mathbb R} f(c)\\mathrm{sign}(\\rho^N-c)K''\\ast (\\hat{\\rho}^N - \\rho)\\,\\varphi\\,dx dt \\\\\n&\\quad + \\int_0^T \\int_{\\mathbb R} f(c)(\\mathrm{sign}(\\rho^N-c) -\\mathrm{sign}(\\rho-c))K''\\ast \\rho\\,\\varphi\\,dx dt.\n\\end{align*}\nThe first of the two terms can be handled as above. 
By using Remark \\ref{rem:empirical} and Lemma \\ref{lem:empirical1}, we get \n\\begin{equation}\\label{bla3}\n\\int_0^T \\int_{\\mathbb R} |f(c)\\,\\mathrm{sign}(\\rho^N-c)\\,K''\\ast (\\hat{\\rho}^N - \\rho)\\,\\varphi|\\,dx\\, dt \\leq \\frac{C(K'',\\|f\\|_{\\infty},\\varphi)}{N}\\,.\n\\end{equation}\nOn the other hand, passing to the limit in the terms including the difference $\\mathrm{sign}(\\rho^N-c)- \\mathrm{sign}(\\rho-c)$ is less straightforward because of the discontinuity of the sign function.\nLet us then focus on the proof of\n\\[ \\lim_{N \\to \\infty} \\int_0^T \\int_{\\mathbb R} f(c)(\\mathrm{sign}(\\rho^N-c)-\\mathrm{sign}(\\rho-c))K''\\ast \\rho\\,\\varphi\\,dxdt = 0\\,. \\]\nIn order to get rid of the discontinuity, we introduce two smooth approximations of the \\emph{sign} function, which we call $\\eta_{\\delta}^{\\pm}$, such that\n\\[ \\mathrm{sign}(z) - \\eta_{\\delta}^+(z) \\geq 0 \\quad \\mbox{ and } \\quad \\mathrm{sign}(z)-\\eta_{\\delta}^-(z) \\leq 0. \\]\nLet us recall that the regularity of $K$ ensures the existence of a constant $L>0$ such that $|K''(z)| \\leq L$ for every $z \\in [-2\\mathrm{meas}(\\mathrm{supp}(\\rho^N)), 2\\mathrm{meas}(\\mathrm{supp}(\\rho^N))]$ and every $N$.\nThen we can estimate\n\\begin{align*}\n\\int_0^T\\int_{\\mathbb R}f(c)\\mathrm{sign}&(\\rho^N-c) K''\\ast \\rho \\varphi \\\\\n&= \\int_0^T\\int_{\\mathbb R}f(c)\\mathrm{sign}(\\rho^N-c)(K''-L)\\ast\\rho\\,\\varphi + \\int_0^T\\int_{\\mathbb R}f(c)\\mathrm{sign}(\\rho^N-c)L\\ast \\rho\\,\\varphi \\\\\n&\\leq \\int_0^T\\int_{\\mathbb R}f(c)\\eta_{\\delta}^+(\\rho^N-c)(K''-L)\\ast \\rho\\,\\varphi + \\int_0^T\\int_{\\mathbb R}f(c)\\eta_{\\delta}^-(\\rho^N-c)L\\ast \\rho\\,\\varphi\\,,\n\\end{align*}\nwhere the inequality holds because\n\\begin{align*}\n&(\\mathrm{sign}(\\rho^N-c)-\\eta_{\\delta}^+(\\rho^N-c))(K''-L)\\ast \\rho \\leq 0 \\\\\n&(\\mathrm{sign}(\\rho^N-c)-\\eta_{\\delta}^-(\\rho^N-c)) L\\ast \\rho \\leq 0.\n\\end{align*}\nNow, observe 
that\n\\begin{align*}\n\\lim_{N\\to \\infty}& f(c) \\int_0^T \\int_{\\mathbb R} (\\eta_{\\delta}^+(\\rho^N -c) - \\eta_{\\delta}^+(\\rho -c)) (K''-L)\\ast \\rho\\,\\varphi\\\\\n&\\leq \\lim_{N\\to \\infty} f(c)\\int_0^T\\int_{\\mathbb R} |\\eta_{\\delta}^+(\\rho^N -c) -\\eta_{\\delta}^+(\\rho -c)| |(K''-L)\\ast\\rho\\,\\varphi| \\\\\n&\\leq \\lim_{N\\to \\infty} f(c) 2L \\| \\varphi\\|_{\\infty} Lip[\\eta_{\\delta}^+] \\int_0^T\\int_{\\mathbb R} |\\rho^N-\\rho| \\\\\n&\\leq C(L, \\varphi, \\eta_{\\delta}^+) \\lim_{N\\to \\infty} \\| \\rho^N - \\rho\\|_{L^1([0,T]\\times \\mathbb R)}= 0,\n\\end{align*}\nand in a similar way we get also\n\\[ \\lim_{N \\to \\infty} \\int_0^T\\int_{\\mathbb R} f(c)(\\eta_{\\delta}^-(\\rho^N-c) - \\eta_{\\delta}^-(\\rho-c))L\\ast \\rho\\,\\varphi = 0\\,, \\]\nthus implying that\n\\begin{equation*}\n\\limsup_{N\\to\\infty} \\int_0^T\\int_{\\mathbb R}f(c)\\mathrm{sign}(\\rho^N-c) K''\\ast \\rho\\,\\varphi \\leq \\int_0^T\\int_{\\mathbb R} f(c)[\\eta_{\\delta}^+ (\\rho-c)(K''-L)\\ast\\rho + \\eta_{\\delta}^-(\\rho-c)L\\ast \\rho]\\,\\varphi\\,.\n\\end{equation*}\nAt this point, the Dominated Convergence Theorem ensures that we can pass to the limit in $\\delta$ to get\n\\[\\limsup_{N \\to \\infty} f(c)\\int_0^T\\int_{\\mathbb R} \\mathrm{sign}(\\rho^N-c) K'' \\ast \\rho\\,\\varphi \\leq \\int_0^T\\int_{\\mathbb R} f(c)\\,\\mathrm{sign}(\\rho-c) K'' \\ast \\rho\\,\\varphi.\\]\nA symmetric argument provides the reverse inequality with $\\liminf$ replacing $\\limsup$, hence we obtain\n\\begin{equation}\\label{limiteinN}\n\\lim_{N \\to \\infty} f(c)\\int_0^T\\int_{\\mathbb R} (\\mathrm{sign}(\\rho^N-c)-\\mathrm{sign}(\\rho-c)) K'' \\ast \\rho\\,\\varphi = 0.\n\\end{equation}\nThe above argument, together with~\\eqref{bla2}-\\eqref{limiteinN}, implies estimate~\\eqref{entropia}, and the proof is complete.\n\\end{proof}\n\nWe now tackle another crucial task for our result, namely the \\emph{uniqueness of the entropy solution} for a fixed initial datum. 
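Before doing so, we note that the approximations $\\eta_{\\delta}^{\\pm}$ used in the previous proof admit a very simple concrete realization. The following Python sketch is purely illustrative and not the specific choice made above: it uses piecewise-linear (Lipschitz) profiles, which can be mollified to obtain smooth ones, and verifies numerically the required ordering with respect to the sign function.

```python
import numpy as np

def eta_plus(z, delta):
    # eta^+ <= sign everywhere; equal to sign outside [0, delta]
    return np.clip(2.0 * z / delta - 1.0, -1.0, 1.0)

def eta_minus(z, delta):
    # eta^- >= sign everywhere; equal to sign outside [-delta, 0]
    return np.clip(2.0 * z / delta + 1.0, -1.0, 1.0)

z = np.linspace(-1.0, 1.0, 20001)
for delta in (0.5, 0.1, 0.01):
    assert np.all(np.sign(z) - eta_plus(z, delta) >= 0)    # sign - eta^+ >= 0
    assert np.all(np.sign(z) - eta_minus(z, delta) <= 0)   # sign - eta^- <= 0

# pointwise convergence to sign away from 0 as delta -> 0
far = np.abs(z) > 0.05
assert np.max(np.abs(np.sign(z[far]) - eta_plus(z[far], 0.01))) == 0.0
```

Any pair with these two properties suffices for the sandwiching argument; the specific profile only affects the Lipschitz constant $Lip[\\eta_{\\delta}^{\\pm}]$ entering the estimates.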
To perform this task we rely on a stability result due to Karlsen and Risebro \\cite{KarlsenRiebro}, which we report here, in an adapted version, for the sake of completeness.\n\n\\begin{theorem}\\label{KarlsenRiebro}\nLet $f,P,Q$ be such that\n\\[ f \\quad \\mbox{ is locally Lipschitz, }\\qquad P,Q \\in W^{1,1}(\\mathbb R) \\cap \\mathcal{C}(\\mathbb R), \\qquad P_x,Q_x \\in L^{\\infty}(\\mathbb R), \\]\nand let $p,q \\in L^\\infty([0,T]; BV(\\mathbb R))$ be respectively entropy solutions to\n\\[ \\left\\lbrace \\begin{array}{l}\np_t = (f(p)P(x))_x, \\quad p(0,x)=p_0(x),\\\\\nq_t = (f(q)Q(x))_x, \\quad q(0,x)=q_0(x),\n\\end{array}\\right.\\]\nwhere the initial data $(p_0,q_0)$ are in $L^1(\\mathbb R) \\cap L^{\\infty}(\\mathbb R) \\cap BV(\\mathbb R)$.\nThen for almost every $t \\in (0,T)$ one has\n\\begin{equation}\\label{contrattivitasol}\n\\| p(t) - q(t) \\|_{L^1(\\mathbb R)} \\leq \\| p_0 - q_0 \\|_{L^1(\\mathbb R)} + t(C_1 \\| P-Q\\|_{L^{\\infty}(\\mathbb R)} + C_2\\| P-Q\\|_{BV(\\mathbb R)}),\n\\end{equation}\nwhere $C_1 = Lip[f] \\min\\{ \\|P\\|_{BV(\\mathbb R)}, \\|Q \\|_{BV(\\mathbb R)}\\}$ and $C_2 = \\| f\\|_{L^{\\infty}}$.\n\\end{theorem}\n\nWe are now ready to prove our main theorem.\n\n\\proofof{Theorem~\\ref{main}}\nThe results in Theorem \\ref{convergence} and Lemma \\ref{entropiarho} imply that there exists a subsequence of $\\rho^N$ converging almost everywhere on $[0,+\\infty)\\times \\mathbb R$ and in $L^1_{loc}$ to an entropy solution $\\rho$ to \\eqref{CauchyProblem} in the sense of Definition \\ref{solentropicadef}. Therefore, the proof of Theorem~\\ref{main} is concluded once we show that $\\rho$ is the unique entropy solution. We argue by contradiction. 
Assume that there exist two different functions $\\rho$ and $\\varrho$ satisfying Definition~\\ref{solentropicadef} with $\\rho(0,\\cdot) = \\varrho(0,\\cdot) = \\bar{\\rho}$; then we can define the two vector fields $P(t,x) = (K' \\ast \\rho(t,\\cdot))(x)$ and $Q(t,x)= (K'\\ast \\varrho(t,\\cdot))(x)$.\nIn order to apply Theorem~\\ref{KarlsenRiebro} to $P$ and $Q$, let us check that all the assumptions therein are satisfied.\nFirst of all, $P$ and $Q$ are locally Lipschitz in $\\mathbb R$ thanks to the assumption (AK), thus $P_x, Q_x \\in L^{\\infty}_{loc}(\\mathbb R)$. Then, we observe that\n\\begin{align*}\n|P(t,x) - Q(t,x)| &= \\left| \\int_{\\mathbb R} K'(x-y)\\rho(t,y)dy - \\int_{\\mathbb R} K'(x-y)\\varrho(t,y)dy \\right| \\\\\n&\\leq \\int_{\\mathbb R} |K'(x-y)(\\rho(t,y) - \\varrho(t,y))|dy \\leq L_{\\bar{\\rho}} \\| \\rho - \\varrho\\|_{L^{\\infty}([0,T];L^1(\\mathbb R))},\n\\end{align*}\nand\n\\begin{align*}\n\\int_{\\mathbb R} |P_x(t,x) - Q_x(t,x)|dx &= \\int_{\\mathbb R} |K'' \\ast \\rho(t,x) - K'' \\ast \\varrho(t,x)| dx \\\\\n&= \\int_{\\mathbb R} |K''\\ast(\\rho - \\varrho)(t,x)|dx \\leq L_{\\bar{\\rho}} \\|\\rho - \\varrho\\|_{L^{\\infty}([0,T];L^1(\\mathbb R))},\n\\end{align*}\nwhere $L_{\\bar\\rho}=\\max\\{\\|K'\\|_{L^\\infty(I_{\\bar\\rho})},\\|K''\\|_{L^1(I_{\\bar\\rho})}\\}$, and $I_{\\bar\\rho}=[-2\\mathrm{meas}(\\mathrm{supp}(\\bar{\\rho})), 2\\mathrm{meas}(\\mathrm{supp}(\\bar{\\rho}))]$.\nAs a consequence,\n\\begin{align*}\n& \\| P-Q\\|_{L^{\\infty}([0,T] \\times \\mathbb R)} \\leq L_{\\bar{\\rho}} \\| \\rho - \\varrho\\|_{L^{\\infty}([0,T];L^1(\\mathbb R))},\\\\\n & \\| P-Q\\|_{L^{\\infty}([0,T] ; BV(\\mathbb R))} \\leq L_{\\bar{\\rho}} \\| \\rho - \\varrho\\|_{L^{\\infty}([0,T];L^1(\\mathbb R))}.\n\\end{align*}\nBy applying Theorem~\\ref{KarlsenRiebro} to $\\rho,\\varrho, P$ and $Q$ we obtain\n\\begin{equation}\\label{assurdo}\n\\| \\rho(t) - \\varrho(t) \\|_{L^1(\\mathbb R)} \\leq C(K,\\bar{\\rho})\\, t\\, \\| \\rho(t) - \\varrho(t) \\|_{L^1(\\mathbb R)}.\n\\end{equation}\nAssume 
that there exists an open interval $(t_1,t_2)\\subset [0,T]$ such that $\\rho(t,\\cdot)$ and $\\varrho(t,\\cdot)$ differ in $L^1(\\mathbb R)$ for $t\\in (t_1,t_2)$. Then, due to the fact that \\eqref{eq:intro_PDE} is invariant with respect to time translations, inequality~\\eqref{assurdo} implies\n\\begin{equation}\\label{assurdovero}\n\\| \\rho(t,\\cdot) - \\varrho(t,\\cdot) \\|_{L^1(\\mathbb R)} \\leq C(K,\\bar{\\rho}) (t-t_1) \\| \\rho(t,\\cdot) - \\varrho(t,\\cdot) \\|_{L^1(\\mathbb R)}\\quad \\forall\\,t \\in (t_1,t_2).\n\\end{equation}\nClearly, we can always choose $t\\in (t_1,t_2)$ close enough to $t_1$ that $C(K,\\bar{\\rho}) (t-t_1)< 1$; for such $t$, \\eqref{assurdovero} forces $\\| \\rho(t,\\cdot) - \\varrho(t,\\cdot) \\|_{L^1(\\mathbb R)} = 0$, in contradiction with the choice of $(t_1,t_2)$. In conclusion, $\\rho(t,\\cdot)\\equiv \\varrho(t,\\cdot)$ on $[0,T]$ and the proof is complete.\n\\end{proof}\n\n\\section{Non-uniqueness of weak solutions and steady states}\\label{sec:discussion}\n\nThe use of the notion of entropy solution in the present context is not merely motivated by the technical need of identifying a notion of solution (stronger than weak solutions) for which uniqueness can be proved. Similarly to what happens for scalar conservation laws, we exhibit explicit examples of initial data in $BV$ for which there exist two distinct weak solutions to the Cauchy problem \\eqref{CauchyProblem}. \n\nFor simplicity, we use\n\\[v(\\rho)=(1-\\rho)_{+}.\\]\nConsider the initial condition\n\\[\\bar\\rho(x)=\\mathbf{1}_{[-1,-1\/2]}+\\mathbf{1}_{[1\/2,1]}.\\]\nClearly, the stationary function\n\\[\\rho_s(t,x)=\\mathbf{1}_{[-1,-1\/2]}+\\mathbf{1}_{[1\/2,1]}\\]\nis a weak solution to \\eqref{CauchyProblem} with initial condition $\\bar\\rho$. To see this, let $\\varphi\\in C^1_c([0,+\\infty)\\times \\mathbb R)$. 
We have\n\\begin{align*}\n & \\int_0^{+\\infty}\\int_\\mathbb R\\left[\\rho_s\\varphi_t + \\rho_s v(\\rho_s)K'\\ast \\rho_s\\,\\varphi_x\\right] dx dt + \\int_\\mathbb R\\bar\\rho\\varphi(0,x) dx \\\\\n & \\ = \\int_0^{+\\infty}\\frac{d}{dt}\\left(\\int_{[-1,-1\/2]\\cup [1\/2,1]}\\varphi dx\\right)dt +\\int_{[-1,-1\/2]\\cup [1\/2,1]}\\varphi(0,x) dx = 0,\n\\end{align*}\nwhere we used that the flux term vanishes identically: indeed $\\rho_s$ only takes the values $0$ and $1$, so that $\\rho_s v(\\rho_s)\\equiv 0$.\n\nWe now prove that $\\rho_s$ is not an entropy solution, in that it does not satisfy the entropy condition in Definition \\ref{solentropicadef}. Let $\\psi\\in C^\\infty_c(\\mathbb R)$ be a standard non-negative mollifier supported on $[-1\/4,1\/4]$. Let $T>0$ and consider the test function $\\varphi(t,x)=\\phi(x)\\xi(t)$ with\n\\[\\phi(x)=\n\\begin{cases}\n\\psi(x+1\/2) & \\hbox{if $-3\/4\\leq x\\leq -1\/4$} \\\\\n\\psi(x-1\/2) & \\hbox{if $1\/4\\leq x\\leq 3\/4$} \\\\\n0 & \\hbox{otherwise},\n\\end{cases}\n\\]\nand $\\xi\\in C^\\infty([0,+\\infty))$ with $\\xi(t)=1$ for $t\\leq T$, $\\xi(t)=0$ for $t\\geq T+1$ and $\\xi$ non-increasing. 
Let us set $c=1\/2$, $I=[1\/4,3\/4]$, and compute\n\\begin{align*}\n & \\int_\\mathbb R |\\rho_s -c|\\phi dx +\\int_0^{+\\infty}\\int_\\mathbb R\\left[|\\rho_s-c|\\phi(x)\\xi'(t) -\\mathrm{sign}(\\rho_s-c)(f(\\rho_s)-f(c))K'\\ast\\rho_s\\phi'(x)\\xi(t) \\right.\\\\\n & \\qquad \\left.-f(c)K''\\ast \\rho_s \\phi(x)\\xi(t)\\right]\\,dxdt\\\\\n & \\leq 2\\int_I \\varphi dx + \\frac{1}{4}\\int_0^{T+1}\\xi(t)dt\\left[\\int_{(-I)\\cap(-\\infty,1\/2]}K'\\ast\\rho_s \\varphi_x dx - \\int_{(-I)\\cap[1\/2,+\\infty)}K'\\ast\\rho_s \\varphi_x dx \\right.\\\\\n & \\quad \\left.- \\int_{I\\cap(-\\infty,1\/2]}K'\\ast\\rho_s \\varphi_x dx +\\int_{I\\cap[1\/2,+\\infty)}K'\\ast\\rho_s \\varphi_x dx- \\int_{(-I)\\cup I}K''\\ast \\rho_s \\varphi dx\\right]\\\\\n & = 2\\int_I \\varphi dx - \\frac{1}{4}\\int_0^{T+1}\\xi(t)dt\\left[\\int_{(-I)\\cap(-\\infty,1\/2]}K''\\ast\\rho_s \\varphi dx - \\int_{(-I)\\cap[1\/2,+\\infty)}K''\\ast\\rho_s \\varphi dx \\right.\\\\\n & \\quad \\left.- \\int_{I\\cap(-\\infty,1\/2]}K''\\ast\\rho_s \\varphi dx +\\int_{I\\cap[1\/2,+\\infty)}K''\\ast\\rho_s \\varphi dx+\\int_{(-I)\\cup I}K''\\ast \\rho_s \\varphi dx\\right].\n\\end{align*}\nNow, since $K''$ and $\\rho_s$ are even, the same holds for $K''\\ast \\rho_s$. Therefore we get\n\\begin{align}\n & \\int_\\mathbb R |\\rho_s -c|\\varphi dx +\\int_0^T\\int_\\mathbb R\\left[|\\rho_s-c|\\varphi_t -\\mathrm{sign}(\\rho_s-c)(f(\\rho_s)-f(c))K'\\ast\\rho_s\\varphi_x -f(c)K''\\ast \\rho_s \\varphi\\right]\\,dxdt\\nonumber\\\\\n & \\ \\leq 2\\int_I \\varphi dx - \\frac{1}{2}\\int_0^{T+1}\\xi(t)dt\\iint_{I\\times I} \\left(K''(x-y)+K''(x+y)\\right) \\varphi(x)dy dx.\\label{eq:nonunique1}\n\\end{align}\nLet us now require for simplicity the following additional assumption:\n\\begin{equation}\\label{eq:nonunique_assumption}\n K''(x)>0\\qquad \\hbox{for all $x\\in \\mathbb R$}.\n\\end{equation}\nActually, such an assumption can be relaxed; see Remark \\ref{relaxed} below. 
Then, the last integral in \\eqref{eq:nonunique1} is clearly positive, and recalling that $\\xi(t)=1$ on $t\\in[0,T]$, we can choose $T$ large enough so that the whole right-hand side of \\eqref{eq:nonunique1} is strictly negative, thus contradicting Definition \\ref{solentropicadef}.\n\nThe above argument shows that $\\rho_s$ is a weak solution but not an entropy solution. On the other hand, the initial condition $\\rho_s$ is $L^\\infty$ and $BV$, therefore it must generate an entropy solution according to our main Theorem \\ref{main}. Clearly, such solution cannot coincide with $\\rho_s$. We have therefore proven the following theorem.\n\n\\begin{theorem}\\label{thm:nonuniqueness}\n Assume (Av), (AK), and \\eqref{eq:nonunique_assumption} are satisfied. Then, there exists an initial condition $\\bar\\rho \\in L^\\infty(\\mathbb R)\\cap BV(\\mathbb R)$ such that the Cauchy problem \\eqref{CauchyProblem} has more than one distributional weak solution.\n\\end{theorem}\n\n\\begin{remark}\\label{relaxed}\n\\emph{The assumption \\eqref{eq:nonunique_assumption} can be relaxed to include also Gaussian kernels $K(x)=- A e^{-B x^2}$ with $A,B>0$. Indeed, in order to fulfil \n\\[\\iint_{I\\times I} \\left(K''(x-y)+K''(x+y)\\right) \\varphi(x)dy dx>0\\]\none has to choose the size of the interval $I$ small enough. We omit the details.}\n\\end{remark}\n\n\\begin{remark}\n\\emph{The fact that the initial condition $\\rho_s$ will not give rise to a stationary solution can also be seen intuitively by using the result in Theorem \\ref{main}. 
Let us approximate $\\bar\\rho$ with $2(N+1)$ particles of mass $1\/(2(N+1))$, with $N$ integer, located at $\\bar{x}_i$, $i=0,\\ldots,2N+1$, with\n\\begin{align*}\n & \\bar{x}_i=-1+\\frac{i}{2(N+1)}\\,,\\qquad i=0,\\ldots,N\\\\\n & \\bar{x}_i=1\/2+\\frac{i-N}{2(N+1)}\\,,\\qquad i=N+1,\\ldots,2N+1.\n\\end{align*}\nLet us now evolve the particles' positions with the usual ODE system\n\\[\\dot{x}_i=-\\frac{v(R_i)}{N}\\sum_{j>i}K'(x_i-x_j) -\\frac{v(R_{i-1})}{N}\\sum_{j<i}K'(x_i-x_j).\\]\nThe initial velocity of the particle labelled $N$ (the rightmost particle of the left bump) satisfies\n\\[\\dot{x}_N(0)=\\frac{v(R_N)}{N}\\sum_{j>N}K'(x_j(0)-x_N(0)) \\geq v(1\/N)\\frac{N+1}{N}K'(2)>v(1\/2)K'(2)>0.\\]\nSimilarly, one can show that all particles $i=0,\\ldots,N-1$ `move' from their initial position, although their initial speed is zero: once the particle labelled $N$ detaches, the discrete density near its neighbours drops below $1$ and they start moving in turn. A numerical simulation performed in Section \\ref{sec:numerics} actually shows that for large $N$ the discrete density tends to form a unique bump for large times. Hence, since Theorem \\ref{main} shows that the particle solution is arbitrarily close in $L^1_{loc}$ to the entropy solution, this argument supports the evidence that the entropy solution is not stationary. }\n\\end{remark}\n\nApart from producing an explicit example of non-uniqueness of weak solutions, the above example shows that there are stationary weak solutions that are not entropy solutions, and therefore cannot be considered as stationary solutions to our problem according to Definition \\ref{solentropicadef}. This raises the following natural question: what are the steady states of \\eqref{eq:intro_PDE} in the entropy sense? Before answering this question, it will be useful to tackle another task: as the approximating particle system converges to the entropy solution, detecting the \\emph{steady states of \\eqref{Odes}} will give us useful insight into the steady states at the continuum level.\n\nLet us restrict, for simplicity, to the case of an even initial condition $\\bar{\\rho}$, such that $\\|\\bar{\\rho}\\|_{L^1}=1$, and $N\\in\\mathbb N$ fixed. 
We assume here that $K'$ is supported on the whole $\\mathbb R$. Consider the following particle configuration,\n\\begin{equation}\\label{stable_conf}\n\\begin{cases}\n\\tilde{x}_1=-\\frac{1}{2}+\\frac{1}{2N},\\\\\n\\tilde{x}_{i+1}=\\tilde{x}_1+\\frac{i}{N},\\quad i=1,...,N-2,\\\\\n\\tilde{x}_N=\\tilde{x}_1+\\frac{N-1}{N}=\\frac{1}{2}-\\frac{1}{2N}.\n\\end{cases}\n\\end{equation}\nWith this choice we get\n\\[R_i = \\frac{1}{N(\\tilde{x}_{i+1}-\\tilde{x}_i)}=1, \\quad v(R_i)=0, \\quad \\forall i=1,...,N-1,\n\\]\nand it is easy to show that this configuration is a stationary solution for system \\eqref{Odes}. Actually, up to space translations, this is the \\emph{only} possible stationary solution. To prove this, assume that we have a particle configuration as in \\eqref{stable_conf} but with only one particle labelled $I$ such that\n\\[\n \\tilde{x}_I=\\tilde{x}_{1}+\\frac{I-1}{N}, \\quad \\tilde{x}_{I+1}=\\bar{x}>\\tilde{x}_{I}+\\frac{1}{N}.\n\\]\nFor such a configuration\n\\[\nR_I = \\frac{1}{N(\\tilde{x}_{I+1}-\\tilde{x}_I)}<1, \\quad v(R_I)>0,\\mbox{ and } \\quad v(R_i)=0 \\quad\\forall i\\neq I,\n\\]\nand the $I$-th particle evolves according to\n\\begin{align*}\n & \\dot{\\tilde{x}}_I = -\\frac{v(R_I)}{N}\\sum_{j>I}K'(\\tilde{x}_I-\\tilde{x}_j) =-\\frac{v(R_I)}{N}\\sum_{j>I}K'\\left(\\frac{1}{N}(I-j)\\right)>0,\n\\end{align*}\nso that $\\tilde{x}_I$ moves with positive velocity.\n\nWe observe that, as $N\\to\\infty$, the piecewise constant density reconstructed from configuration \\eqref{stable_conf} converges in $L^1$ to the step function\n\\[\n\\rho_S=\\mathbf{1}_{[-\\frac{1}{2},\\frac{1}{2}]}.\n\\]\nThe above discussion suggests that all initial data with multiple bumps attaining only the values $0$ and $1$ are (weak solutions but) not entropy solutions, except $\\rho_S$. 
Actually, this statement can be proven exactly in the same way as we proved Theorem \\ref{thm:nonuniqueness}, since the precise positions of the decreasing discontinuity at $x=-1\/2$ and of the increasing discontinuity at $x=1\/2$ play no essential role. By centring the test function $\\varphi$ around the non-admissible discontinuities, one can easily show that the entropy condition is violated. We omit the details. As a consequence, we can assert that $\\rho_S$ is the \\emph{only} stationary solution to \\eqref{eq:intro_PDE} in the sense of Definition \\ref{solentropicadef}.\n\n\\section{Numerical simulations}\\label{sec:numerics}\nThe last section of the paper is devoted to presenting some numerical experiments based on the particle method introduced above, supporting the results in the previous sections. The qualitative property that emerges most clearly in the simulations below is that solutions tend to aggregate and narrow their support. However, the maximal density constraint prevents blow-up, and the density profile tends for large times towards the non-trivial stationary pattern presented at the end of the previous section. We compare our particle method with a classical Godunov method for \\eqref{eq:intro_continuum}.\n\n\\subsection*{Particle simulations}\n\nWe first test the particle method introduced in Section \\ref{sec:2}. We proceed as follows: we set the number of particles as $N$ and we reconstruct the initial distribution according to \\eqref{eq:dscr_IC} (for step functions we simply set the particles initially at distance $\\frac{\\ell}{N}$ from each other, where $\\ell$ is the length of the support). 
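For a non-step datum, our reading of the reconstruction \\eqref{eq:dscr_IC} is an equal-mass placement of the particles; the short Python sketch below illustrates this reading (the quantile-based helper is our own and purely illustrative, not taken from the original MATLAB code):

```python
import numpy as np

# Equal-mass initialization (our reading of eq. dscr_IC): particle x_i sits at
# the (i/(N-1))-quantile of the normalized initial density, so every gap
# between consecutive particles carries the same mass.

def init_particles(rho, a, b, N, M=20000):
    xs = a + (np.arange(M) + 0.5) * (b - a) / M    # quadrature midpoints
    cdf = np.cumsum(rho(xs))
    cdf = cdf / cdf[-1]                            # normalized cumulative mass
    return np.interp(np.linspace(0.0, 1.0, N), cdf, xs)

# Parabolic datum (eq. id_parabola): particles cluster where rho_bar is larger.
x = init_particles(lambda y: 0.75 * (1.0 - y**2), -1.0, 1.0, 300)
gaps = np.diff(x)
assert gaps[len(gaps) // 2] < gaps[0]              # smaller gaps near the peak
```

For a step function this reduces to the equispaced placement described above.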
Once we have defined the initial distribution, we solve the system \\eqref{Odes} with a MATLAB solver and we reconstruct the discrete density as\n\\begin{equation}\\label{eq:central}\n R_i(t)=\\frac{m}{2N(x_{i+1}(t)-x_{i-1}(t))}, \\quad i=2,\\ldots,N-1.\n\\end{equation}\n\nThe choice of central differences does not affect the particle evolution, since in solving system \\eqref{Odes} we define $R_i$ with forward differences. The choice in \\eqref{eq:central} is only motivated by the symmetry of the patterns we expect to achieve for large times.\n\\begin{remark}\\label{rem_zero_den}\n\\emph{In the construction of the discrete densities we face the problem of assigning a density to the first and the last particle (or only to the last one if we use forward differences). Among all possible choices, we set these two densities to zero, namely\n\\[\n R_1(t)=R_N(t)=0.\n\\]\nThis is a natural choice if we are dealing with step functions, but it is not suitable for more general initial conditions, see \\figurename~\\ref{fig:cup_IF}.}\n\\end{remark}\n\nIn all the simulations we set\n\\[\n v(\\rho)=1-\\rho, \\quad K(x)=\\frac{C}{\\sqrt{2\\pi}}e^{-\\frac{x^2}{2}} \\mbox{ and } N=300.\n\\]\nIn the particle evolution we do not fix the time step, which is automatically determined by the solver.\n\nThe first example we consider is the case of a single step function with symmetric support,\n\\begin{equation}\\label{eq:sing_step}\n \t\\bar{\\rho}(x) = 0.3 \\quad x\\in\\left[-1,1\\right].\n\\end{equation}\nFor this initial condition $m=0.6$, so the final configuration will be a step function of value $\\rho=1$ supported in $\\left[-0.3,0.3\\right]$. 
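The whole pipeline for this step datum (equispaced initialization, integration of \\eqref{Odes}, tracking of the support) can be sketched in Python. This is a hedged reconstruction, not the original MATLAB code: we assume an attractive kernel (i.e. $C=-1$ in the definition of $K$ above), the form of the ODE system \\eqref{Odes} as we read it, scipy's solve_ivp in place of the MATLAB solver, and a reduced particle number for speed:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of the particle scheme for the step datum (eq. sing_step):
# rho_bar = 0.3 on [-1, 1], total mass m = 0.6.  Assumed ODE system (eq. Odes):
#   dx_i/dt = -v(R_i)/N * sum_{j>i} K'(x_i - x_j)
#             -v(R_{i-1})/N * sum_{j<i} K'(x_i - x_j),
# with the boundary densities set to zero (Remark rem_zero_den).

N, m = 50, 0.6                 # fewer particles than the paper's N = 300, for speed
dm = m / N                     # mass carried by each inter-particle gap

def Kprime(x):                 # K'(x) for K(x) = -exp(-x^2/2)/sqrt(2*pi), i.e. C = -1
    return x * np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

def v(r):                      # v(rho) = (1 - rho)_+
    return max(1.0 - r, 0.0)

def rhs(t, x):
    R = dm / np.diff(x)        # forward-difference discrete densities
    dxdt = np.zeros(N)
    for i in range(N):
        ahead = Kprime(x[i] - x[i + 1:]).sum()
        behind = Kprime(x[i] - x[:i]).sum()
        vR = v(R[i]) if i < N - 1 else 0.0
        vRm = v(R[i - 1]) if i > 0 else 0.0
        dxdt[i] = -(vR * ahead + vRm * behind) / N
    return dxdt

x0 = np.linspace(-1.0, 1.0, N)
sol = solve_ivp(rhs, (0.0, 3.0), x0, rtol=1e-4)
support = sol.y[:, -1].max() - sol.y[:, -1].min()
print(support)                 # shrinks from 2 towards m = 0.6 as time grows
```

Running the sketch for longer times, the support keeps shrinking until the discrete density saturates at $1$, consistently with the stationary pattern of the previous section.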
In \\figurename~\\ref{fig:1s_IF} we plot the initial (left) and final (right) configurations, while the time evolution is plotted in \\figurename~\\ref{fig:1s_Ev}.\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL1S03_300_1}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL1S03_300_601}\n\\end{minipage}\n\\end{center}\n\\caption{On the left: initial condition as in \\eqref{eq:sing_step}; on the right: the final stationary configuration. We plot the discrete density as a continuous (red) line and the particle positions as (blue) circles at the bottom of the picture.}\n\\label{fig:1s_IF}\n\\end{figure}\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{minipage}[c]{.70\\textwidth}\n\\includegraphics[width=.95\\textwidth]{Time-evolution_1s}\n\\end{minipage}\n\\end{center}\n\\caption{Evolution of the discrete density for the initial configuration \\eqref{eq:sing_step}.}\n\\label{fig:1s_Ev}\n\\end{figure}\n\nNext we show the evolution corresponding to the following initial condition,\n\\begin{equation}\\label{eq:id_parabola}\n\\bar{\\rho}(x) = \\frac{3}{4}(1-x^2), \\quad x\\in\\left[-1,1\\right].\n\\end{equation}\nIn this case too the function is symmetric with respect to the origin, so it will converge to the unitary step function supported in $\\left[-0.5,0.5\\right]$, since $\\bar{\\rho}$ has normalized mass. 
As in the previous example, the initial and final configurations and the time evolution of the solution are plotted in \\figurename~\\ref{fig:cup_IF}.\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL_CUP_300_1}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL_CUP_300_601}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.70\\textwidth}\n\\vspace{3mm}\n\\includegraphics[width=.95\\textwidth]{Time-evolution_G}\n\\end{minipage}\n\\end{center}\n\\caption{For the initial condition \\eqref{eq:id_parabola} the initial particle configuration is obtained thanks to \\eqref{eq:dscr_IC}. The discrete density behaves properly around all the particles except the first and the last one; see Remark \\ref{rem_zero_den}. }\n\\label{fig:cup_IF}\n\\end{figure}\n\nWe conclude with step functions with disconnected support. We first study the case\n\\begin{equation}\\label{eq:2s_0206}\n \t\\bar{\\rho}(x) = \\begin{cases}\n \t 0.2 \\quad x\\in\\left[-0.5,0\\right]\\\\\n 0.6 \\quad x\\in\\left[0.5,1\\right]\n \t\\end{cases},\n\\end{equation}\nshowing that the two bumps merge into a single step. Since symmetry is lost, it is not straightforward to determine where this final configuration will stabilize, but in \\figurename~\\ref{fig:0206_Ev} we can see that the two bumps still aggregate into a step of unitary density with support of length $m$.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL2S0206_300_1}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL2S0206_300_601}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.70\\textwidth}\n\\vspace{3mm}\n\\includegraphics[width=.95\\textwidth]{Time-evolution_2s0206}\n\\end{minipage}\n\\end{center}\n\\caption{Evolution of the two-step initial condition \\eqref{eq:2s_0206}. 
The bump on the left is the one with lower density; it moves faster, attracted by the one on the right, and the two merge into a single step of unitary density.}\n\\label{fig:0206_Ev}\n\\end{figure}\n\nMore interesting is the case of the following initial condition:\n\\begin{equation}\\label{eq:2s_11}\n \\bar{\\rho}(x) = \\begin{cases}\n \t 1 \\quad x\\in\\left[-0.5,0\\right]\\\\\n 1 \\quad x\\in\\left[0.5,1\\right]\n \t\\end{cases}.\n\\end{equation}\nNote that this is a stationary weak solution to \\eqref{eq:intro_continuum} but it is not an entropy solution.\nIn \\figurename~\\ref{fig:11_IF} we plot the time evolution of this initial configuration.\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL2S11_300_1}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL2S11_300_31}\n\\end{minipage}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL2S11_300_121}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL2S11_300_601}\n\\end{minipage}\n\\end{center}\n\\caption{Solution to \\eqref{CauchyProblem} with initial condition \\eqref{eq:2s_11}. The initial condition is a weak stationary solution to \\eqref{eq:intro_PDE}. However, the particle scheme converges to another solution, actually the unique entropy solution to \\eqref{CauchyProblem}. The picture shows that the two `internal' discontinuities are not admissible in the entropy sense, and they are therefore `smoothed' immediately after $t=0$.}\n\\label{fig:11_IF}\n\\end{figure}\n\n\n\\subsection*{Comparison with classical Godunov method}\nIn order to validate the previous simulations we compare the results with a classical Godunov method. The main issue in this case is dealing with the two directions in the transport term. 
More precisely, since the kernel $K$ is an even function, we can rephrase \\eqref{eq:intro_continuum} as\n\\begin{equation}\\label{eq:god1}\n \\partial_t \\rho = \\partial_x(\\rho v(\\rho) )K_\\rho^{+}(x)+\\partial_x(\\rho v(\\rho)) K_\\rho^{-}(x)+\\rho v(\\rho) K''\\ast\\rho,\n\\end{equation}\nwhere\n\\begin{align*}\n & K_\\rho^{+}(x)=\\int_{x\\geq y}K'(x-y)\\rho(y)dy\\geq 0,\\\\\n & K_\\rho^{-}(x)=\\int_{x < y}K'(x-y)\\rho(y)dy\\leq 0.\n \\end{align*}\n The evolution of $\\rho$ is driven by two transport fields: $K_\\rho^{+}$, pushing the density from left to right, and $K_\\rho^{-}$, pushing the density from right to left. The third term on the r.h.s. in \\eqref{eq:god1} plays the role of a source term. Following the standard finite volume approximation procedure on $N$ cells $\\left[x_{j-\\frac12},x_{j+\\frac12}\\right]$, the discrete equation reads as\n\\[\n \\frac{d}{dt}\\tilde{\\rho}_j = K_\\rho^{+}(x_j)\\frac{F_{j+\\frac12}^{+}-F_{j-\\frac12}^{+}}{\\Delta x} + K_\\rho^{-}(x_j)\\frac{F_{j+\\frac12}^{-}-F_{j-\\frac12}^{-}}{\\Delta x} + \\tilde{\\rho}_j v(\\tilde{\\rho}_j)dK_j\n\\]\nwhere $F_{j+\\frac12}^{+}$ and $F_{j+\\frac12}^{-}$ are the Godunov approximations of the fluxes and $dK_j$ is an approximation of the convolution in the reaction term obtained via a quadrature formula. We integrate in time with a time step satisfying the CFL condition of the method. 
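The sign decomposition above is easy to verify numerically. The sketch below discretizes the two convolutions by a simple quadrature, assuming (as in the particle tests) an attractive Gaussian kernel, for which $K'$ is odd and nonnegative on $[0,+\\infty)$:

```python
import numpy as np

# Numerical check of the decomposition K_rho^+ / K_rho^-: with K' odd and
# nonnegative on [0, +inf), the discretized convolutions satisfy
# K_rho^+ >= 0 and K_rho^- <= 0, so the two fields transport the density
# in opposite directions, and they sum to the full convolution K' * rho.

xs = np.linspace(-2.0, 2.0, 401)
h = xs[1] - xs[0]
rho = np.where(np.abs(xs) <= 1.0, 0.3, 0.0)     # the step datum of the tests

def Kprime(x):                                   # K'(x) for an attractive Gaussian K
    return x * np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

Kplus = np.array([h * np.sum(Kprime(x - xs[xs <= x]) * rho[xs <= x]) for x in xs])
Kminus = np.array([h * np.sum(Kprime(x - xs[xs > x]) * rho[xs > x]) for x in xs])

assert np.all(Kplus >= 0.0) and np.all(Kminus <= 0.0)
full = np.array([h * np.sum(Kprime(x - xs) * rho) for x in xs])
assert np.allclose(Kplus + Kminus, full)
```

The same quadrature can serve as the approximation $dK_j$ of the convolution in the reaction term.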
In \\figurename~\\ref{fig:Test} we compare the solutions obtained with the two methods in all the examples illustrated above.\n\n\n \\begin{figure}[!ht]\n\\begin{center}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL_God_1S03_300_601}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL_God_CUP_300_601}\n\\end{minipage}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL_God_2S0206_300_601}\n\\end{minipage}\n\\hspace{3mm}\n\\begin{minipage}[c]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{PLNL_God_2S11_300_601}\n\\end{minipage}\n\\end{center}\n\\caption{Comparison between the particle (red stars) and Godunov (green continuous line) methods at final time $t=1$. On the top: solutions corresponding to initial conditions \\eqref{eq:sing_step} (left) and \\eqref{eq:id_parabola} (right). On the bottom: final configurations for \\eqref{eq:2s_0206} (left) and \\eqref{eq:2s_11} (right).}\n\\label{fig:Test}\n\\end{figure}\n\n\\section*{Acknowledgements}\nThe authors acknowledge support from the EU-funded Erasmus Mundus programme `MathMods - Mathematical models in engineering: theory, methods, and applications' at the University of L'Aquila, from the Italian GNAMPA mini-project `Analisi di modelli matematici della fisica,\ndella biologia e delle scienze sociali', and from the local fund of the University\nof L'Aquila `DP-LAND (Deterministic Particles for Local And Nonlocal Dynamics)'.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nExperiments at the Fermilab Tevatron and the CERN Large Hadron Collider are engaged in searches\nfor the Higgs boson, a heavy scalar resonance predicted by\nthe Standard Model (SM). 
SM Higgs bosons are excitations of the neutral $CP$-even component\nof an $SU(2)_L$ weak isospin doublet field $H$ carrying unit hypercharge under $U(1)_Y$, whose\nvacuum expectation value (VEV) $v\/\\sqrt{2}$ is responsible for electroweak symmetry breaking \n(for reviews, see \\cite{Gunion:1989we,Djouadi:2005gi}).\n\nIf one or more new heavy resonances are discovered at the LHC, it will be imperative\nto pin down their quantum numbers relative to the expected properties of the SM Higgs.\nDetermination of the spin and $CP$ properties of a new resonance will be \nchallenging, although recent studies indicate that definitive results could be obtained\nat or around the moment of discovery, if the decay mode to $ZZ$ is observable \\cite{Cao:2009ah, DeRujula:2010ys,Gao:2010qx}.\n\nGiven a neutral $CP$-even spin 0 resonance $S$, one still needs to establish its \nelectroweak quantum numbers in order to reveal any possible connection to electroweak\nsymmetry breaking. This in turn requires information about the couplings between\n$S$ and pairs of vector bosons, which can be extracted from observations of $S$\ndecaying via $W^+W^-$, $ZZ$, $\\gamma\\gamma$, or $Z\\gamma$.\nTo an excellent approximation\nthe couplings of the SM Higgs boson to $WW$ and $ZZ$ derive from the dimension-four Higgs kinetic terms\nin the SM effective action, and are thus directly related to both the strength of electroweak\nsymmetry breaking and the electroweak quantum numbers of the Higgs field.\nThe couplings of the SM Higgs boson to $\\gamma\\gamma$, $Z\\gamma$, or a pair of gluons\nare elegantly derived from the observation that Higgs couplings in the SM \nare identical to those of a conformal-compensating dilaton in a theory where\nscale invariance is violated by the trace anomaly \\cite{Adler:1976zt, Djouadi:1993ji, Goldberger:2007zk, Bai:2009ms}.\nThus these couplings\nappear first at dimension five, with coefficients related to SM gauge group beta functions.\n\nIn this paper we exhibit a 
general analysis, up to operators of dimension five, of\nthe relation between the electroweak properties of $S$ and its decay branchings\nto $V_1V_2 = WW$, $ZZ$, $\\gamma\\gamma$, and $Z\\gamma$. We ignore decays into two\ngluons because of the folklore that these are unobservable, and postpone until\nthe end a discussion of extracting complementary information from \nvector boson fusion production of $S$ \\cite{Plehn:2001nj, Hankele:2006ma}. Nevertheless, we should emphasize that our analysis only involves the decays of the scalar into electroweak vector bosons, and hence is independent of the production mechanism of the scalar. \n\nA key feature of our analysis is the classification of Higgs look-alikes according\nto the custodial symmetry $SU(2)_C$. In the SM this global symmetry is the diagonal remnant\nafter electroweak symmetry breaking of an accidental global $SU(2)_L\\times SU(2)_R$,\nin which $SU(2)_L$ and the $U(1)_Y$ subgroup of $SU(2)_R$ are gauged. \nCustodial symmetry implies $\\rho\\equiv m_W^2\/(m_Zc_w)^2 = 1$ \\cite{Sikivie:1980hm}, where $c_w$ is\nthe cosine of the weak mixing angle. Experimentally $\\rho$ is constrained to be\nvery close to one \\cite{Amsler:2008zzb}, implying either that the full scalar sector\nrespects $SU(2)_C$, or that there are percent-level cancellations unmotivated by\nsymmetry arguments. In our analysis we will assume that unbroken\n$SU(2)_C$ is built into the scalar sector.\n\nWe consider $S$ arising from one of the neutral $CP$-even components of arbitrary \nspin 0 multiplets of $SU(2)_L\\times SU(2)_R$. The case of a singlet under\n$SU(2)_L\\times SU(2)_R$ is special, since then no $SV_1V_2$ couplings can\nappear from operators of dimension four. All other cases can be grouped\naccording to whether the neutral scalar components transform as a singlet\nor a 5-plet under $SU(2)_C$. 
Again under the assumption that the full scalar\npotential respects the custodial symmetry, these three ``pure cases'' only give\nrise to one nontrivial mixed case, i.e. when $S$ from a $SU(2)_L\\times SU(2)_R$\nsinglet mixes with another $S$ from the $SU(2)_C$ singlet part of a\n$SU(2)_L\\times SU(2)_R$ nonsinglet.\n\nGiven the framework just described, we are able to enumerate all possible\ndeviations from the SM expectations for decays of a Higgs look-alike into\npairs of electroweak bosons. These deviations are typically quite large,\nand thus accessible to experiment at the LHC. Furthermore\nthe deviations exhibit patterns that point towards\nparticular non-SM scenarios. It would therefore be\npossible with LHC data to rule out a new scalar resonance as the agent (or the\nsole agent) of electroweak symmetry breaking.\nThis possibility emphasizes the importance of observing all\nfour $V_1V_2$ decay channels at the LHC with maximum sensitivity.\nWe give examples of Higgs imposters that meet SM expectations\nfor branching fractions into two of the electroweak $V_1V_2$ modes,\nonly revealing their ersatz nature in the other two $V_1V_2$ decay modes. The approach taken here is complementary to that in Refs.~\\cite{Cao:2009ah, DeRujula:2010ys,Gao:2010qx}, where angular correlations and total decay width were used to distinguish Higgs look-alikes. A fully global analysis using all of the available decay and production observables in each channel will of course give superior results to the simple counting experiments described here.\n\n\nIn Sect. \\ref{sect:section2} we describe the dimension four couplings\nof an arbitrary neutral $CP$-even scalar charged under $SU(2)_L\\times U(1)_Y$\nto $WW$ or $ZZ$; we also describe the general dimension five couplings\nof a $SU(2)_L\\times U(1)_Y$ singlet to two electroweak vector bosons.\nSect. \\ref{sect:section3} contains the general framework based on custodial\nsymmetry. In Sect. 
\\ref{sect:section4} we provide general results on the patterns\nof $S\\to V_1V_2$ branching fractions, as well as discussing some interesting special cases.\nFurther discussion and outlook are in Sect. \\ref{sect:section5}, with some general formulae\nfor off-shell decays relegated to an appendix.\n\n\n\\section{Scalar Couplings with $V_1V_2$}\n\\label{sect:section2}\n\nIn this section we consider scalar couplings with two electroweak gauge bosons $V_1V_2$, where $V_1V_2=\\{WW, ZZ, Z\\gamma, \\gamma\\gamma\\}$. Such couplings are dictated by the electroweak quantum numbers of the scalar $S$. We will write down $SU(2)_L\\times U(1)_Y$ invariant operators giving rise to the $SV_1V_2$ couplings at the leading order. For an electroweak nonsinglet, the leading operator is the kinetic term of the scalar, assuming $S$ receives a VEV, while for the singlet scalar the leading operator starts at dimension five. \n\n\nFor nonsinglet scalars, the leading contribution to the $SV_1V_2$ coupling arises from spontaneous breaking of $SU(2)_L\\times U(1)_Y$ down to $U(1)_{em}$ via the Higgs mechanism, when $S$ develops a VEV. It is possible to derive the general coupling when there are multiple scalars in arbitrary representations of the $SU(2)_L$ group \\cite{Tsao:1980em, Haber:1999zh}. Using the notation $\\phi_k$ for scalars in the complex representations and $\\eta_i$ for scalars in the real representations\\footnote{A real representation is defined as a real multiplet with integer weak isospin and $Y=0$. }, the kinetic terms are\n\\begin{equation}\n\\sum_k {\\rm Tr} (D_\\mu\\phi_k)^\\dagger (D^\\mu\\phi_k) + \\frac12 \\sum_i{\\rm Tr} (D_\\mu \\eta_i)(D^\\mu\\eta_i) \\ ,\n\\end{equation}\nwhere \n\\begin{equation}\nD_\\mu = \\partial_\\mu -ig W_\\mu^a T^a - \\frac{i}2 g' B_\\mu Y \n\\end{equation}\nis the covariant derivative. 
In the above\n$W_\\mu^a$ and $g$ are the $SU(2)_L$ gauge bosons and gauge coupling, respectively, while $B_\\mu$ and $g'$ are the $U(1)_Y$ gauge boson and gauge coupling. In addition, $T^a$ are the $SU(2)_L$ generators in the corresponding representation of the scalar, and $Y$ is the hypercharge generator. For complex representations we work in the basis where $T^3$ and $Y$ are diagonal. After shifting the scalar fields by their VEV's: $\\phi_k\\to \\phi_k +\\langle \\phi_k \\rangle$ and $\\eta_i\\to \\eta_i + \\langle \\eta_i \\rangle$, where the VEV's are normalized as follows\n\\begin{equation}\n\\label{eq:vevnorm}\n{\\rm Tr}(\\langle \\phi_k \\rangle^\\dagger \\langle \\phi_k \\rangle)=\\frac12 v_k^2 \\ , \\qquad {\\rm Tr}(\\langle \\eta_i \\rangle^\\dagger \\langle \\eta_i \\rangle)= \\tilde{v}_i^2 \\ ,\n\\end{equation}\nelectroweak symmetry is broken and $W$ and $Z$ bosons become massive. The mass eigenstates are defined as\n\\begin{eqnarray}\nW^\\pm &=& \\frac1{\\sqrt{2}}(W^1 \\mp i W^2)\\ , \\nonumber \\\\ \n\\label{eq:eweigen}\n\\left( \\begin{array}{cc}\n W^3\\\\\n B \n \\end{array}\\right) &=& \n \\left( \\begin{array}{cc}\n c_w & s_w \\\\\n -s_w & c_w\n \\end{array}\\right)\n \\left( \\begin{array}{c}\n Z\\\\\n A\n \\end{array}\\right) ,\n\\end{eqnarray}\nwhere the sine and cosine of the weak mixing angle are $c_w = {g}\/{\\sqrt{g^2+g^{\\prime 2}}}$ and $s_w = {g^\\prime}\/{\\sqrt{g^2+g^{\\prime 2}}}$, respectively. 
Notice the unbroken $U(1)_{em}$ leads to the conditions\n\\begin{equation}\n\\left(T^3+\\frac12 Y\\right) \\langle \\phi_k \\rangle= 0 \\ , \\qquad T^3 \\langle \\eta_i \\rangle = 0 \\ .\n\\end{equation}\n Using $T^3 \\langle \\phi_k \\rangle = -Y \\langle \\phi_k \\rangle\/2$ it is possible to express the mass terms of the $W$ and $Z$ in terms of the eigenvalues $T^2 \\langle \\phi_k\\rangle \\equiv T^aT^a \\langle \\phi_k\\rangle = T_k(T_k+1) \\langle \\phi_k\\rangle$:\n\\begin{eqnarray}\n\\label{eq:mwgen}\nm_W^2&=&\\frac18 g^2\\sum_{k} \\left[4T_k(T_k+1) - Y_k^2\\right] v_k^2 + \\frac1{2} g^2\\sum_{i} T_i(T_i+1) \\tilde{v}_i^2 \\ , \\\\\n\\label{eq:mzgen}\nm_Z^2 &=&\\frac14\\, \\frac{g^2}{c_w^2} \\sum_k Y_k^2 v_k^2 \\ ,\n\\end{eqnarray}\nwhere $Y_k$ and $Y_i$ are the hypercharges of $\\phi_k$ and $\\eta_i$. Couplings of the real component of the neutral scalar with the $W$ and $Z$ can be read off by the replacement $v_k \\to v_k(1+\\phi_k^0\/v_k)$ and $\\tilde{v}_i \\to \\tilde{v}_i(1+ \\eta_i^0\/\\tilde{v}_i)$ in the mass terms:\n\\begin{equation}\n\\Gamma^{\\mu\\nu}_{SV_1V_2}= g_{SV_1V_2}\\, g^{\\mu\\nu} \\ ,\n\\end{equation}\nwhere\\footnote{We include a factor of 2! when there are two identical particles in the vertex.}\n\\begin{equation}\n\\label{eq:genwwzzcoup}\n\\begin{array}{ll}\n\\displaystyle g_{\\phi_k WW} = \\frac14 g^2 \\left[4T_k(T_k+1) - Y_k^2\\right] v_k \\ , \\phantom{ccc} &\n\\displaystyle g_{\\phi_k ZZ} = \\frac12\\, \\frac{g^2}{c_w^2} Y_k^2 v_k \\ , \\\\\n\\displaystyle g_{\\eta_i WW} = g^2 T_i(T_i+1) \\tilde{v}_i \\ , &\n\\displaystyle g_{\\eta_i ZZ} = 0 \\ .\n\\end{array}\n\\end{equation}\nNotice that a scalar in a real representation only couples to $WW$ but not $ZZ$. Moreover,\nat this order there is no scalar coupling with $Z\\gamma$ and $\\gamma\\gamma$, which are only induced at the loop level.\n\nAt this point it is worth discussing a few examples of the $SU(2)_L$ representations appearing in the literature. 
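Before turning to those examples, a quick sanity check of \\eqref{eq:mwgen}--\\eqref{eq:mzgen}: for a single complex multiplet $(T,Y)$ they give a tree-level ratio $m_W^2\/(m_Z c_w)^2 = [4T(T+1)-Y^2]\/(2Y^2)$, which can be evaluated in exact rational arithmetic (the small helper below is ours, purely illustrative):

```python
from fractions import Fraction as F

# Tree-level rho parameter for a single complex multiplet with isospin T and
# hypercharge Y, read off from eqs. (mwgen)-(mzgen):
#   rho = [4 T (T+1) - Y^2] / (2 Y^2)

def rho_param(T, Y):
    T, Y = F(T), F(Y)
    return (4 * T * (T + 1) - Y**2) / (2 * Y**2)

assert rho_param(F(1, 2), 1) == 1      # the SM doublet (T,Y) = (1/2, 1)
assert rho_param(1, 2) == F(1, 2)      # complex triplet (T,Y) = (1, 2)
assert rho_param(3, 4) == 1            # the well-known large multiplet with rho = 1
```

This makes quantitative the statement below that generic multiplet VEVs spoil $\\rho=1$.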
The benchmark is of course the doublet Higgs scalar $H$ with $(T,Y)=(1\/2,1)$. Couplings of the $CP$-even neutral Higgs $h$ with two electroweak bosons are\n\\begin{equation} \n\\label{eq:hwwzz}\ng_{hWW} = \\frac12 g^2 v_h \\ , \\quad g_{hZZ}= \\frac12\\, \\frac{g^2}{c_w^2} v_h \\ , \\quad g_{hZ\\gamma}= g_{h\\gamma\\gamma} = 0 \\ .\n\\end{equation}\nTwo more popular examples are the real triplet scalar $\\phi$ and the complex triplet scalar $\\Phi$ with $(T,Y)=(1,0)$ and $(T,Y)=(1,2)$, respectively, for which the couplings are\n\\begin{eqnarray}\n\\label{eq:phiwwzz}\n&& g_{\\phi^0WW} = 2 g^2 v_\\phi \\ , \\quad g_{\\phi^0ZZ}= g_{\\phi^0Z\\gamma}= g_{\\phi^0\\gamma\\gamma} = 0 \\ , \\\\\n\\label{eq:Phiwwzz}\n&& g_{\\Phi^0WW} = g^2 v_\\Phi \\ , \\quad g_{\\Phi^0ZZ}= 2 \\frac{g^2}{c_w^2} v_\\Phi \\ , \\quad g_{\\Phi^0Z\\gamma}= g_{\\Phi^0\\gamma\\gamma} = 0 \\ .\n\\end{eqnarray}\nWe see that the $SV_1V_2$ couplings are distinctly different for scalars carrying different electroweak quantum numbers, which would give rise to different patterns of decay branching ratios into two electroweak vector bosons. However, it is well known that $\\phi$ and $\\Phi$ individually violate the custodial symmetry and lead to unacceptably large corrections to the $\\rho$ parameter unless the VEV is extremely small, on the order of a few GeV \\cite{Amsler:2008zzb, Boughezal:2004ef, Awramik:2006uz}.\n\n\n\nFor a singlet scalar $s$, the $sV_1V_2$ couplings do not come from the Higgs mechanism. 
Instead, they originate from the following two dimension-five operators at the leading order:\n\\begin{equation}\n\\label{eq:singletsu2}\n\\kappa_2 \\frac{s}{4m_s} W_{\\mu\\nu}^a W^{a\\, \\mu\\nu} + \\kappa_1\\frac{s}{4m_s} B_{\\mu\\nu} B^{\\mu\\nu} \\ ,\n\\end{equation}\nwhere the singlet $s$ is assumed to be $CP$-even.\n We have normalized the dimensionful couplings to the mass of the singlet $m_s$, although in general an unrelated mass scale could enter.\nIn terms of the mass eigenstate in Eq.~(\\ref{eq:eweigen}), the operators become\n\\begin{eqnarray}\n&& \\kappa_2 \\frac{s}{2m_s} W_{\\mu\\nu}^+ W^{-\\, \\mu\\nu} + (\\kappa_2 c_w^2 + \\kappa_1 s_w^2) \\frac{s}{4m_s} Z_{\\mu\\nu}Z^{\\mu\\nu} \\nonumber \\\\\n&& \\quad \n + 2 c_w s_w \\frac{s}{4m_s} (\\kappa_2 - \\kappa_1) Z_{\\mu\\nu}F^{\\mu\\nu} + (\\kappa_2 s_w^2 +\\kappa_1 c_w^2) \\frac{s}{4m_s} F_{\\mu\\nu} F^{\\mu\\nu} \\ .\n\\end{eqnarray}\nfrom which we obtain the following couplings:\n\\begin{eqnarray}\n\\label{eq:stensor}\n&& \\Gamma^{\\mu\\nu}_{sV_1V_2}= \\frac{g_{sV_1V_2}}{m_s} (p_{V_1}\\cdot p_{V_2} g^{\\mu\\nu} -p_{V_1}^\\nu p_{V_2}^\\mu) \\ , \\\\\n \\label{eq:singscoup}\n&& \\begin{array}{ll}\ng_{sWW}=\\kappa_2 \\ , \\phantom{ccc} & g_{sZZ}=(\\kappa_2 c_w^2 + \\kappa_1 s_w^2)\\ , \\\\\ng_{sZ\\gamma}= c_w s_w (\\kappa_2-\\kappa_1) \\ , \\phantom{ccc} & g_{s\\gamma\\gamma} = (\\kappa_2 s_w^2 + \\kappa_1 c_w^2)\\ .\n\\end{array}\n\\end{eqnarray}\nOne sees immediately that branching ratios following from these couplings are distinctly different from those coming from the Higgs mechanism. Moreover, the four couplings are controlled by only two unknown coefficients $\\kappa_2$ and $\\kappa_1$. 
So measurements of any two couplings would allow us to predict the remaining couplings, which, if verified experimentally, would be a striking confirmation of the singlet nature of the scalar resonance.\n\n\nIt is worth commenting that the coefficients $\kappa_2$ and $\kappa_1$ are related to the one-loop beta functions of $SU(2)_L$ and $U(1)_Y$ gauge groups, respectively, via the Higgs low-energy theorem \cite{Ellis:1975ap,Shifman:1979eb}:\n\begin{eqnarray}\n \beta_2(g) &=& - \frac{g^3}{(4\pi)^2} \left(\frac{11}3 C_2(G) - \frac13 n_s C(r)\right) \ , \\\n \beta_1(g') &=& +\frac{g'^3}{(4\pi)^2} \frac13 Y^2 n_s^\prime \ .\n \end{eqnarray}\n In the above the Casimir invariants are defined as\n \begin{equation}\n {\rm Tr}[t^a_r t^b_r] = C(r)\delta^{ab} \ , \qquad t_G^a t_G^a = C_2(G) \cdot \mathbf{1} \ ,\n \end{equation}\nwhile $n_s$ is the number of scalars in the complex representation $r$ and $n_s^\prime$ is the number of scalars charged under $U(1)_Y$. \n Such a connection has been exploited to compute the partial widths of $h\to gg$ and $h\to \gamma\gamma$ in the standard model \cite{Ellis:1975ap,Shifman:1979eb}, as well as to derive the constraints on the Higgs effective couplings \cite{Low:2009di}. For our purpose such relations serve to demonstrate that the special case of $\kappa_2=\kappa_1$, where the ratio of singlet couplings with $WW$ and $ZZ$ coincides with the standard model expectation, in general requires a conspiracy between the two one-loop beta functions to cancel each other. In this case, however, the coupling to $\gamma\gamma$ is identical to the coupling to $ZZ$. On the other hand, depending on whether the $SU(2)_L$ running is asymptotically free, $\kappa_2$ and $\kappa_1$ could have either the same or opposite sign, resulting in a reduction (same sign) or enhancement (opposite sign) of the $Z\gamma$ width relative to $ZZ$ and $\gamma\gamma$ channels.
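The counting above, four couplings determined by two coefficients, is easy to check numerically. The Python sketch below (with the assumed numerical input $\sin^2\theta_w \approx 0.23$ and hypothetical coupling values) inverts Eq.~(\ref{eq:singscoup}): given "measured" values of $g_{sWW}$ and $g_{sZZ}$ it recovers $\kappa_2$ and $\kappa_1$ and predicts the remaining two couplings.

```python
import math

sw2 = 0.23            # assumed sin^2(theta_w); a numerical input, not from the text
cw2 = 1.0 - sw2

def predict_couplings(g_sWW, g_sZZ):
    """Invert Eq. (singscoup): recover (kappa2, kappa1) from the WW and ZZ
    couplings, then predict the Z-gamma and gamma-gamma couplings."""
    kappa2 = g_sWW
    kappa1 = (g_sZZ - kappa2 * cw2) / sw2
    g_sZg = math.sqrt(cw2 * sw2) * (kappa2 - kappa1)
    g_sgg = kappa2 * sw2 + kappa1 * cw2
    return g_sZg, g_sgg

# Consistency check with hypothetical Wilson coefficients: build the four
# couplings from (kappa2, kappa1) and confirm the inversion round-trips.
k2, k1 = 0.7, -0.3
gZg, ggg = predict_couplings(k2, k2 * cw2 + k1 * sw2)
assert abs(gZg - math.sqrt(cw2 * sw2) * (k2 - k1)) < 1e-12
assert abs(ggg - (k2 * sw2 + k1 * cw2)) < 1e-12
```
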
It is also possible that $\kappa_2=0$, resulting in a very suppressed decay width into $WW$. We will discuss these special cases further in Sect.~\ref{sect:section4}.\n\n\n\section{Implications of Custodial Invariance}\n\label{sect:section3}\n\nWe have seen in the previous section that scalar couplings with two electroweak bosons are uniquely determined by the $SU(2)_L\times U(1)_Y$ quantum number of the scalar involved. For nonsinglet scalars the leading contribution to the $SV_1V_2$ couplings comes from the kinetic terms via the Higgs mechanism, which in turn are related to the contribution of each scalar VEV to the masses of the $W$ and $Z$ bosons. However, the ratio of the $W$ and $Z$ masses is measured very precisely and is related to the precision electroweak observable $\rho=m_W^2\/(m_Z c_w)^2$, which is determined at the tree-level by the structure of the scalar sector in a model. Experimentally $\rho$ is very close to 1 at the percent level \cite{Amsler:2008zzb}, which severely constrains the electroweak quantum number of any scalar which develops a VEV. \n\nIt has been known for a long time that the Higgs sector in the standard model possesses an accidental global symmetry $SU(2)_L\times SU(2)_R$, in which the $SU(2)_L$ and $T_R^3$ are gauged and identified with the weak isospin and the hypercharge, respectively. After electroweak symmetry breaking the global symmetry is broken down to the diagonal $SU(2)$, which remains unbroken. The unbroken $SU(2)$ is dubbed the custodial symmetry in Ref.~\cite{Sikivie:1980hm}, where it was shown that the relation $\rho=1$ is protected by the custodial symmetry $SU(2)_C$. In this section we classify scalar interactions with two electroweak vector bosons according to the $SU(2)_C$ quantum number of the scalar.\footnote{In the SM the custodial invariance is explicitly broken by fermion masses, since the up-type and down-type fermions have different masses.
However, this breaking is oblique in nature and only feeds into the gauge boson masses at the loop-level. Thus we do not include this particular effect in our discussion.}\n\nThere are two possibilities for the scalar sector of a model to preserve the $SU(2)_C$ symmetry. One could find a single irreducible representation of $SU(2)_L\times U(1)_Y$ which realizes $\rho=1$. In this case there is only one neutral $CP$-even scalar and the $W$ and $Z$ obtain masses from a single source, the VEV of the neutral scalar $S^0$.\nFrom Eqs.~(\ref{eq:mwgen}) and (\ref{eq:mzgen}) we see that the condition to realize this possibility is\n\begin{equation}\n(2T+1)^2-3Y^2 =1\ .\n\end{equation}\nAn obvious solution is the Higgs doublet $(T,Y)=(1\/2,1)$, beyond which the next simplest case is $(T,Y)=(3,4)$ \cite{Tsao:1980em}. However, it is clear that, since there is only one source for the masses of $W$ and $Z$ bosons, the $SV_1V_2$ couplings are derived by replacing $m_V\to m_V(1+S^0\/v)$ in the mass term, which results in\n\begin{equation}\n\label{eq:smwwzz}\ng_{S^0WW}= 2\frac{m_W^2}{v} \ , \quad g_{S^0ZZ}=2\frac{m_Z^2}{v} \ , \quad g_{S^0Z\gamma}=g_{S^0\gamma\gamma}=0 \ .\n\end{equation}\nIn other words, when there is only a single source for the mass of electroweak bosons, the custodial symmetry uniquely determines the ratio of the scalar couplings to $WW$ and $ZZ$ to be\n\begin{equation}\n\label{eq:gswwzz}\n\frac{g_{S^0WW}}{g_{S^0ZZ}} = \frac{m_W^2}{m_Z^2} = c_w^2\ ,\n\end{equation}\nregardless of the $SU(2)_L\times U(1)_Y$ quantum number of the scalar involved. In the next section we will see that Eq.~(\ref{eq:gswwzz}) predicts the ratio of the decay branching fractions into $WW$ and $ZZ$ to be roughly two-to-one, which is the case in the SM with a Higgs doublet.
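The single-multiplet condition above can be cross-checked by a brute-force scan. The short Python sketch below (the search ranges are arbitrary cutoffs) enumerates small representations $(T,Y)$ satisfying $(2T+1)^2-3Y^2=1$; besides the trivial singlet $(0,0)$ it finds exactly the two solutions quoted in the text.

```python
# Brute-force scan of the single-multiplet condition (2T+1)^2 - 3*Y^2 = 1.
# The search ranges are arbitrary cutoffs; T runs over half-integers.
solutions = []
for twoT in range(0, 15):              # 2T = 0, 1, ..., 14
    for Y in range(0, 30):
        if (twoT + 1) ** 2 - 3 * Y ** 2 == 1:
            solutions.append((twoT / 2, Y))
print(solutions)   # [(0.0, 0), (0.5, 1), (3.0, 4)] -- trivial singlet, doublet, (3,4)
```

This is a Pell-type equation, so further solutions exist at much larger quantum numbers (the next one has $T=25\/2$), far beyond representations of phenomenological interest.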
\n\n\nThe second possibility is to consider multiple scalars all contributing to the $W$ and $Z$ masses through the Higgs mechanism in such a way that, although individually the custodial invariance is not respected, the $\\rho$ parameter remains 1 due to cancellations between the multiple scalars. This would happen if the scalars sits in a complete multiplet $(\\mathbf{M}_L, \\mathbf{N}_R)$ of the full $SU(2)_L\\times SU(2)_R$ group, where $\\mathbf{M}$ and $\\mathbf{N}$ are positive integers labeling the $M$-dimensional and $N$-dimensional irreducible representations of $SU(2)_L$ and $SU(2)_R$, respectively. Recall that $SU(2)_L$ is fully gauged and identified with the weak isospin, while $T_R^3$ is gauged and corresponds to the $U(1)_Y$ such that $T^3_R=Y\/2$, which implies the electric charge is exactly $T_C^3$:\n\\begin{equation}\nQ = T_L^3 +\\frac{Y}2 = T_L^3+T_R^3 = T_C^3 \\ .\n\\end{equation}\nTherefore, all neutral components in the scalar multiplets have $T_C^3=0$. On the other hand, unbroken custodial symmetry requires that only $SU(2)_C$ singlets are allowed to have a VEV. In other words, the scalar representation $(\\mathbf{M}_L,\\mathbf{N}_R)$ must contain a state with $T_C=0$, where $T_C$ is the eigenvalue labeling the quadratic Casmir operator $T_C^aT_C^a = T_C (T_C+1) \\openone$. Since $T_C$ satisfies\n\\begin{equation}\n|M-N| \\le T_C \\le M+N\\ ,\n\\end{equation}\nwe conclude that $\\rho=1$ is possible only when $M=N$ and the scalar must furnish the $(\\mathbf{N}_L, \\mathbf{N}_R)$ representation. \n\nThe trivial representation $(\\mathbf{1}_L, \\mathbf{1}_R)$ is a singlet scalar under $SU(2)_L\\times U(1)_Y$, which was considered in the previous section. In the following we focus on the non-trivial representations, in which the $SV_1V_2$ couplings arise from the Higgs mechanism after the electroweak symmetry breaking. 
We will represent a scalar $\Phi_N$ in the $(\mathbf{N}_L, \mathbf{N}_R)$ multiplet by an $N\times N$ matrix whose column vectors are $N$-plets under $SU(2)_L$. The kinetic term of $\Phi_N$ is \n\begin{eqnarray}\n\label{eq:phinkin}\n&& \frac12 {\rm Tr}\left[ (D^\mu\Phi_N)^\dagger D_\mu\Phi_N\right] \ ,\\ \n&& D_\mu\Phi_N = \partial_\mu \Phi_N + i gW_\mu^a T^a \Phi_N - i g' B_\mu \Phi_N T^3 \ ,\n\end{eqnarray}\nwhere $T^a$ are generators of $SU(2)$ in the $N$-plet representation.\nWhen $\Phi_N$ develops a VEV in a custodially invariant fashion\footnote{When $N$ is an odd integer, $\Phi_N$ contains a real $SU(2)_L$ $N$-plet with zero hypercharge, whose VEV has a different normalization from that in Eq.~(\ref{eq:vevnorm}): $\tilde{v}=v\/\sqrt{2}$.}\n\begin{equation}\n\label{eq:phinvev}\n\langle \Phi_N \rangle = \frac{v}{\sqrt{2}}\,\openone \ ,\n\end{equation}\nelectroweak symmetry breaking occurs and $\rho=1$ at the tree-level.\n\nIn general various scalars in $\Phi_N$ could mix with one another and the mass eigenstates do not necessarily have well-defined $SU(2)_L\times U(1)_Y$ quantum numbers. However, it is highly desirable that the scalar potential respects the custodial symmetry so as to be consistent with $\rho=1$, which we assume to be the case. \nThen scalars with different $SU(2)_C$ quantum numbers do not mix and all the mass eigenstates have definite $SU(2)_C$ quantum numbers, according to which\nwe will proceed to classify the $SV_1V_2$ interactions. The $(\mathbf{N}_L, \mathbf{N}_R)$ representation decomposes under the unbroken $SU(2)_C$ as\n\begin{equation}\n(\mathbf{N}_L, \mathbf{N}_R) = \mathbf{1} \oplus \mathbf{3} \oplus \cdots \oplus \mathbf{2N-3} \oplus \mathbf{2N-1} \ .\n\end{equation}\nScalars in the $(4k+1)$-plet are $CP$-even and those in the $(4k+3)$-plet are $CP$-odd. We assume no $CP$-violation in the scalar sector and neglect the $CP$-odd scalar interactions.
Since we are interested in interactions with two electroweak gauge bosons, it is worth recalling that $W_\mu^a$ and $B_\mu$ transform as (part of) $(\mathbf{3}_L, \mathbf{3}_R)$ under $SU(2)_L\times SU(2)_R$. Therefore the only possible $SU(2)_C$ quantum numbers of a system of two electroweak gauge bosons are a singlet, a triplet, or a 5-plet, which implies the scalar must also be in one of the above three representations in order to have a non-zero coupling with two electroweak bosons. We conclude that $CP$-even $SV_1V_2$ interactions are allowed only when the scalar is either an $SU(2)_C$ singlet or a 5-plet. This is equivalent to saying two spin-1 objects can only couple to either a spin-0 or a spin-2 object. Interactions of two electroweak bosons with scalars in higher representations of $SU(2)_C$ all vanish.\n\n\nLet us define the neutral component of a custodial $n$-plet as $H_n^0= h_n^0 X_n^0$, where $h_n^0$ is the neutral scalar field and $X_n^0$ is an $N\times N$ diagonal matrix satisfying\footnote{Recall that neutral scalars have $T_C^3=T_L^3+T_R^3=0$ and hence belong to the diagonal entries in $\Phi_N$.} \n\begin{equation}\n[T^a,[T^a, X_n^0]] = \frac{n^2-1}{4}\, X_n^0 \ , \qquad \qquad [T^3, X_n^0] = 0 \ , \qquad \qquad {\rm Tr}(X_n^0 X_n^0) = 1 \ .\n\end{equation}\nAs emphasized already, only $h_1^0$ is allowed to develop a VEV.
From Eq.~(\\ref{eq:phinvev}) we see that $\\langle h_1^0\\rangle = \\sqrt{N\/2}\\,v$ and $X_1^0=\\openone\/\\sqrt{N}$, which implies all other neutral components must be (diagonal) traceless matrices:\n\\begin{equation}\n{\\rm Tr}(X_n^0 X_1^0) = {\\rm Tr} (X_n^0) = 0 \\ , \\qquad n \\ge 2 \\ .\n\\end{equation}\nThe VEV of $h_1^0$ gives rise to the following masses from the kinetic term of $\\Phi_N$ :\n\\begin{eqnarray}\n\\label{eq:mwcus}\nm_W^2&=&\\frac14 g^2 v^2\\, {\\rm Tr}\\left[T^aT^a - T^3T^3\\right] = \\frac1{24} g^2 v^2 N(N^2-1) \\ , \\\\\n\\label{eq:mzcus}\nm_Z^2&=&\\frac12 \\, \\frac{g^2}{c_w^2} v^2\\, {\\rm Tr}\\left[ T^3 T^3 \\right] =\\frac1{24} \\frac{g^2}{c_w^2} v^2 N(N^2-1) \\ ,\n\\end{eqnarray}\nwhich exhibits $\\rho=1$. It can be verified explicitly that Eqs.~(\\ref{eq:mwcus}) and (\\ref{eq:mzcus}) are consistent with Eqs.~(\\ref{eq:mwgen}) and (\\ref{eq:mzgen}). Interactions of $h_{n}^0$, $n=1,5$, with electroweak bosons can be obtained by setting $\\Phi_N = (v\/\\sqrt{2})\\openone + H_{n}^0$ in Eq.~(\\ref{eq:phinkin}):\n\\begin{eqnarray}\ng_{h_{n}^0WW} &=& \\frac1{\\sqrt{2}} \\, g^2 v\\, {\\rm Tr}\\left[X_{n}^0 (T^aT^a - T^3T^3)\\right] \\ , \\\\\ng_{h_{n}^0ZZ} &=& \\sqrt{2}\\, \\frac{g^2}{c_w^2} v\\, {\\rm Tr}\\left[X_{n}^0\\, T^3 T^3 \\right] \\ .\n\\end{eqnarray}\nFor the custodial singlet, $n=1$ and $X_1^0 = \\openone\/\\sqrt{N}$, we obtain\n\\begin{eqnarray}\n\\label{eq:h10ww}\ng_{h_{1}^0WW} &=& \\frac1{\\sqrt{2N}} \\, g^2 v\\, {\\rm Tr}\\left[ (T^aT^a - T^3T^3)\\right] = 2\\sqrt{\\frac{2}N}\\frac{m_W^2}{v} \\ , \\\\\n\\label{eq:h10zz}\ng_{h_{1}^0ZZ} &=& \\sqrt{\\frac{2}N}\\, \\frac{g^2}{c_w^2} v\\, {\\rm Tr}\\left[T^3 T^3 \\right] =2\\sqrt{\\frac{2}N} \\frac{m_Z^2}{v} \\ ,\n\\end{eqnarray}\nwhich is a demonstration of the statement that any custodial singlet (apart from the one in the trivial representation $(\\mathbf{1}_L,\\mathbf{1}_R)$) must have couplings to the $WW$ and $ZZ$ bosons with a fixed ratio as in Eq.~(\\ref{eq:gswwzz}). 
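The trace identities underlying Eqs.~(\ref{eq:mwcus})--(\ref{eq:h10zz}) can be verified numerically by constructing the spin-$(N-1)\/2$ generators explicitly. A Python sketch using NumPy follows; the loop range over $N$ is arbitrary.

```python
import numpy as np

def su2_generators(N):
    """Spin-j generators of SU(2) in the N-dimensional irrep, j = (N-1)/2."""
    j = (N - 1) / 2
    m = np.arange(j, -j - 1, -1)                  # T^3 eigenvalues j, ..., -j
    T3 = np.diag(m).astype(complex)
    Tp = np.zeros((N, N), dtype=complex)          # raising operator T^+
    for k in range(1, N):
        Tp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    T1 = (Tp + Tp.conj().T) / 2
    T2 = (Tp - Tp.conj().T) / (2 * 1j)
    return T1, T2, T3

for N in range(2, 7):
    T1, T2, T3 = su2_generators(N)
    tr_TT = np.trace(T1 @ T1 + T2 @ T2 + T3 @ T3).real
    tr_33 = np.trace(T3 @ T3).real
    assert np.isclose(tr_TT, N * (N**2 - 1) / 4)      # Tr[T^a T^a]
    assert np.isclose(tr_33, N * (N**2 - 1) / 12)     # Tr[T^3 T^3]
    # rho = m_W^2 / (c_w^2 m_Z^2) = Tr[T^aT^a - T^3T^3] / (2 Tr[T^3T^3]) = 1
    assert np.isclose((tr_TT - tr_33) / (2 * tr_33), 1.0)
```

The last assertion is the statement that the VEV of Eq.~(\ref{eq:phinvev}) yields $\rho=1$ for every $N$, not just for the doublet.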
On the other hand, since $X_{5}^0$ is a traceless diagonal matrix, we have\n\\begin{equation}\n{\\rm Tr}[X_{5}^0 T^a T^a] \\propto {\\rm Tr}[X_{5}^0 \\openone] = 0 \\ .\n\\end{equation}\nThen the couplings are\n\\begin{eqnarray}\n\\label{eq:g2kww}\ng_{h_{5}^0WW} &=& - \\frac1{\\sqrt{2}} \\, g^2 v\\, {\\rm Tr}\\left[X_{5}^0 T^3T^3\\right] \\ , \\\\\n\\label{eq:g2kzz}\ng_{h_{5}^0ZZ} &=& \\sqrt{2}\\, \\frac{g^2}{c_w^2} v\\, {\\rm Tr}\\left[X_{5}^0 T^3 T^3 \\right] \\ ,\n\\end{eqnarray}\nwhich turn out to have a ratio\n\\begin{equation} \n\\frac{g_{h_{5}^0WW}}{g_{h_{5}^0ZZ}} = - \\frac{c_w^2}2 \\ \n\\end{equation}\nthat is different from the ratio of $c_w^2$ for the custodial singlet $h_1^0$. We emphasize that the ratios of the couplings only depend on the $SU(2)_C$ quantum numbers, and not on the particular $(\\mathbf{N}_L, \\mathbf{N}_R)$ representation.\n\n\n\nAgain we discuss a few examples. The canonical example is the familiar Higgs doublet: $(\\mathbf{2}_L, \\mathbf{2}_R)=\\mathbf{1}\\oplus\\mathbf{3}$, where the complex $SU(2)_L$ doublet decomposes into a singlet and a triplet under $SU(2)_C$. The $SU(2)_C$ singlet is the neutral $CP$-even Higgs, $h$, which develops a VEV and breaks the electroweak symmetry, while the triplet contains the Goldstone bosons eaten by the $W$ and $Z$. Our general expressions in Eqs.~(\\ref{eq:h10ww}) and (\\ref{eq:h10zz}) are consistent with those in Eq.~(\\ref{eq:gswwzz}) for $N=2$.\nAnother example appearing in the literature \\cite{Georgi:1985nv, Chanowitz:1985ug, Gunion:1989ci} is the $(\\mathbf{3}_L, \\mathbf{3}_R)$ representation. Under $SU(2)_L\\times U(1)_Y$ it consists of a real electroweak triplet with $(T,Y)=(1,0)$ and a complex electroweak triplet with $(T,Y)=(1,2)$, whose individual couplings to two electroweak bosons were summarized in Eqs.~(\\ref{eq:phiwwzz}) and (\\ref{eq:Phiwwzz}). 
In this case, the $SU(2)_C$ quantum numbers are $(\mathbf{3}_L, \mathbf{3}_R)=\mathbf{1}\oplus\mathbf{3} \oplus \mathbf{5}$, which contains two $CP$-even neutral scalars in the singlet and the 5-plet and one $CP$-odd scalar in the triplet \cite{Georgi:1985nv}. Our expressions for couplings of the singlet and the 5-plet with $WW$ and $ZZ$ are consistent with those in Refs.~\cite{Georgi:1985nv, Chanowitz:1985ug, Gunion:1989ci}.\footnote{Although $\rho=1$ at the tree-level in this model, constraints from the $Zb\bar{b}$ vertex require $v \sim 50$ GeV \cite{Haber:1999zh}.}\n\nIt is also possible that the scalar sector of a model has multiple neutral scalar particles. In this case only scalars within the same $SU(2)_C$ multiplet are allowed to mix in order to preserve $\rho=1$. Then the ratio of the $SV_1V_2$ couplings in the mass eigenstate depends only on the $SU(2)_C$ quantum number and not on the mixing angle at all, except when there exists an electroweak singlet scalar $s$ which couples to $V_1V_2$ through the higher dimensional operators in Eq.~(\ref{eq:singletsu2}). In this case, it is necessary to include the loop-induced couplings of $h_1^0$ with $Z\gamma$ and $\gamma\gamma$ since they are of the same order as the $sV_1V_2$ couplings. Furthermore, there could be a higher dimensional operator of the form $s|D_\mu \Phi_N|^2$, with the coefficient $\kappa_s\/m_s$, which gives rise to the coupling $sV_1^\mu V_{2\, \mu}$ in addition to those in Eq.~(\ref{eq:stensor}). Even so, there are only seven unknown parameters: $g_{h_1^0WW}$, $g_{h_1^0Z\gamma}$, $g_{h_1^0\gamma\gamma}$, $\kappa_1$, $\kappa_2$, $\kappa_s$, and the mixing angle between $h_1^0$ and $s$, while one could measure a total of eight branching fractions of two mass eigenstates decaying into $V_1V_2$. Therefore there are enough experimental measurements to not only solve for the seven unknowns, but also test the hypothesis of mixing between $h_1^0$ and $s$.
If we observe multiple scalars whose couplings to two electroweak bosons do not follow from those of $h_1^0$ or $h_5^0$, one would be motivated to consider mixing of $h_1^0$ with an electroweak singlet scalar.\n\n\section{Partial Widths of $S\to V_1V_2^{(*)}$}\n\label{sect:section4}\n\nIn this section we compute the partial decay width of $S\to V_1 V_2^{(*)}$ using the couplings derived in the previous sections. Given that the mass of the scalar could be lighter than the $WW$ threshold, we include the case of $S\to V_1V_2^*$ when one of the vector bosons is off-shell. Although decays of an electroweak doublet scalar into two electroweak bosons have been computed both in the on-shell \cite{Lee:1977eg} and off-shell \cite{Rizzo:1980gz, Keung:1984hn, Grau:1990uu} cases, off-shell decays of an electroweak singlet scalar into two electroweak bosons do not appear to have been considered to the best of our knowledge. In the appendix we compute the decay width of a massive spin-0 particle into two off-shell vector bosons, which serves as the basis of the discussion in what follows.\n\n\nFrom Eq.~(\ref{eq:onshellfinal}) in the appendix decays of non-electroweak singlet scalars into $WW$ and $ZZ$ are given by\n\begin{equation}\n\label{eq:hvvgen}\n\Gamma(S\to V_1V_2) = \delta_V \frac1{128\pi} \frac{|\tilde{g}_{hV_1V_2}|^2}{x^2 m_S} \sqrt{1-4x}\, (1-4x+12x^2) \ ,\n\end{equation}\nwhere $x={m_{V}^2}\/{m_S^2}$, $\delta_W=2$ and $\delta_Z=1$.
In the limit $x^2 \ll 1$, which is a good approximation if $m_S$ is much larger than the $ZZ$ threshold, the pattern of a scalar decaying into two electroweak vector bosons is \n\begin{equation}\n\label{eq:doubpatt}\n\Gamma(S\to WW) : \Gamma(S\to ZZ) :\Gamma(S\to Z\gamma) : \Gamma(S\to \gamma\gamma) \ \ \approx \ 2\frac{\tilde{g}_{hWW}^2}{m_W^4} : \n \frac{\tilde{g}_{hZZ}^2}{m_Z^4} : 0 : 0 \ .\n\end{equation}\nIn terms of branching fractions, normalized to the branching ratio into $WW$, we have\n\begin{eqnarray}\n&& {Br}_S(ZZ\/WW) = \frac12 \rho^2 c_w^4 {\tilde{g}_{hZZ}^2}\/{\tilde{g}_{hWW}^2}\approx \frac12 c_w^4 {\tilde{g}_{hZZ}^2}\/{\tilde{g}_{hWW}^2}\ , \\\n&& {Br}_S(Z\gamma\/WW) \approx {Br}_S(\gamma\gamma\/WW) \approx 0 \ ,\n\end{eqnarray}\nwhere $Br_S(V_1V_2\/WW)\equiv Br(S\to V_1V_2)\/Br(S\to WW)$. Custodial symmetry then predicts unique patterns of decay branching fractions for $h_1^0$ and $h_5^0$:\n\begin{eqnarray}\n\label{eq:doubpatt1}\n&& {Br}_{h_1^0}(ZZ\/WW) \approx \frac12 \ , \qquad {Br}_{h_1^0}(Z\gamma\/WW) \approx {Br}_{h_1^0}(\gamma\gamma\/WW) \approx 0 \ , \\\n&& {Br}_{h_5^0}(ZZ\/WW) \approx 2 \ , \qquad {Br}_{h_5^0}(Z\gamma\/WW) \approx {Br}_{h_5^0}(\gamma\gamma\/WW) \approx 0 \ .\n\end{eqnarray}\nWe see that a simple counting experiment would allow us to infer the $SU(2)_C$ quantum number of the decaying scalar!\n\n\begin{figure}[t]\n\includegraphics[scale=1]{fig1.eps}\n\caption{\label{fig:fig1}\it Ratio of branching fractions into $ZZ$ and $WW$, $Br(ZZ\/WW)$, for an $SU(2)_C$ singlet and a 5-plet, as a function of the scalar mass.}\n\end{figure} \n\nIn Fig.~\ref{fig:fig1} we plot the ratio $Br(ZZ\/WW)$ for an $SU(2)_C$ singlet and a 5-plet, including the full kinematic dependence of the gauge boson masses, for the scalar mass between 115 GeV and 1 TeV.
We include the decay into off-shell vector bosons using the expression in Eq.~(\ref{eq:offtotalwidth}) for the scalar mass below the $W$ and\/or $Z$ threshold. Fig.~\ref{fig:fig1} is the unique prediction of custodial symmetry. Any deviation would imply either the electroweak singlet nature of the scalar or significant violation of custodial symmetry, which in turn suggests a cancellation in the $\rho$ parameter at the percent level.\n\n\n\begin{figure}[t]\n\includegraphics[scale=1]{fig2.eps}\n\includegraphics[scale=1]{fig3.eps}\n\caption{\label{fig:fig2}\it The predicted ratios of branchings, as a function of $Br(ZZ\/WW)$, for an electroweak singlet scalar. The red (gray) curves are for\n$Br(\gamma\gamma\/WW)$ and black curves for $Br(Z\gamma\/WW)$. In this plot we assume the branching into $WW$ is nonzero.}\n\end{figure} \n\nOn the other hand, using Eqs.~(\ref{eq:onshellfinal}), (\ref{eqn:onshellZgamma}), and (\ref{eq:sgaga}) in the appendix, an electroweak singlet has the following partial decay widths into two on-shell electroweak bosons: \n\begin{eqnarray}\n\label{eq:sWWwid}\n\Gamma(s\to WW) &=& \frac1{32\pi} g_{sWW}^2\ m_s \sqrt{1-4x}(1-4x+6x^2) \ , \\\n\label{eq:sZZwid}\n\Gamma(s\to ZZ) &=& \frac1{64\pi} g_{sZZ}^2\ m_s \sqrt{1-4x}(1-4x+6x^2)\ , \\\n\label{eq:sggwid}\n\Gamma(s\to Z\gamma) &=& \frac1{32\pi} g_{sZ\gamma}^2\ m_s (1-x)^3 \ , \\\n\Gamma(s\to \gamma\gamma) &=& \frac1{64\pi} g_{s\gamma\gamma}^2\, m_s \ ,\n\end{eqnarray}\nwhere the $g_{sV_1V_2}$ couplings are given in Eq.~(\ref{eq:singscoup}).\nThe pattern of partial decay widths into two electroweak bosons is then, again ignoring the effect of gauge boson masses,\n\begin{equation}\nBr_s(V_1V_2\/WW) = \delta_{V_1V_2} \frac{g_{sV_1V_2}^2}{2g_{sWW}^2}\ ,\n\end{equation}\nwhere $V_1V_2 = \{ZZ,Z\gamma,\gamma\gamma\}$, and $\delta_{V_1V_2}$ is 2 for $Z\gamma$ and 1 otherwise.
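This pattern, in particular the $\delta_{V_1V_2}$ factors, can be checked with a quick numerical sketch in the massless-vector limit; the values of $\kappa_2$, $\kappa_1$, and $\sin^2\theta_w \approx 0.23$ below are illustrative assumptions.

```python
import math

sw2 = 0.23                        # assumed sin^2(theta_w)
cw2 = 1.0 - sw2
k2, k1 = 0.8, 0.2                 # hypothetical Wilson coefficients
m_s = 300.0                       # scalar mass in GeV (only an overall factor here)

g = {"WW": k2,
     "ZZ": k2 * cw2 + k1 * sw2,
     "Zg": math.sqrt(cw2 * sw2) * (k2 - k1),
     "gg": k2 * sw2 + k1 * cw2}

# Partial widths in the x = m_V^2/m_s^2 -> 0 limit of the expressions above
width = {"WW": g["WW"] ** 2 * m_s / (32 * math.pi),
         "ZZ": g["ZZ"] ** 2 * m_s / (64 * math.pi),
         "Zg": g["Zg"] ** 2 * m_s / (32 * math.pi),
         "gg": g["gg"] ** 2 * m_s / (64 * math.pi)}

delta = {"ZZ": 1, "Zg": 2, "gg": 1}
for ch in ("ZZ", "Zg", "gg"):
    assert math.isclose(width[ch] / width["WW"],
                        delta[ch] * g[ch] ** 2 / (2 * g["WW"] ** 2))
```
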
This pattern is generically different from that in Eq.~(\ref{eq:doubpatt}), where the couplings arise from the Higgs mechanism. More importantly, there are only two unknowns $\kappa_1$ and $\kappa_2$. So the branching fractions into $Z\gamma$ and $\gamma\gamma$, normalized to $WW$ mode, could be predicted as follows:\n\begin{eqnarray}\nBr_s(Z\gamma\/WW)&\approx& \frac{c_w^2}{s_w^2} \left[\sqrt{2 Br_s(ZZ\/WW)} -1 \right]^2 \ ,\\\nBr_s(\gamma\gamma\/WW)&\approx&\frac12 \left[ \frac{c_w^2}{s_w^2} \sqrt{2 Br_s(ZZ\/WW)} + 1- \frac{c_w^2}{s_w^2} \right]^2 \ .\n\end{eqnarray}\nIn Fig.~\ref{fig:fig2} we plot the predicted $Br(Z\gamma\/WW)$ and $Br(\gamma\gamma\/WW)$ branching fractions in terms of $Br(ZZ\/WW)$. Experimental verification of these relations would be a striking confirmation of the singlet nature of the scalar resonance.\n\n\nBy inspection of Eq.~(\ref{eq:singscoup}) we see that a special case occurs when $\kappa_2=\kappa_1$, giving\n$Br_s(ZZ\/WW)=1\/2$, similar to that of $h_1^0$. However, in this case we have\n\begin{equation}\nBr_s(Z\gamma\/WW) \approx 0 \ , \qquad Br_s(\gamma\gamma\/WW) \approx \frac12 \ ,\n\end{equation}\nup to corrections due to the mass of the $Z$ boson. By considering all four \npartial widths into the electroweak bosons it is still possible to distinguish \na singlet scalar from the Higgs doublet even in this special case. However, \nas commented at the end of Section \ref{sect:section2}, such a scenario lacks\nany obvious physical motivation.\n\n\n\n\nAnother special case is when $\kappa_2=0$, which occurs in the event that \nthe new fermions inducing the dimension-five operators in Eq.~(\ref{eq:singletsu2}) \ncarry only hypercharge and no isospin. This case is not included in Fig.~\ref{fig:fig2}\nsince the partial width of the scalar decaying into $WW$ vanishes!
Nevertheless, \nthere would still be significant decay branching fractions into $ZZ$, $Z\gamma$, \nand $\gamma\gamma$ states, as predicted by Eq.~(\ref{eq:singscoup}).\n\n\n\n\begin{table}[t]\n\begin{tabular}{|c|c|c|c|} \hline\n\makebox[3cm]{$m_S$ (GeV)} & \makebox[4cm]{$Br(ZZ\/WW)$} & \makebox[4cm]{$Br(Z\gamma\/WW)$} & \makebox[4cm]{$Br(\gamma\gamma\/WW)$}\n\\ \hline \hline\n130 & 0.13 (0.13) & $4.3\times 10^{-2}$ ($6.7\times 10^{-3}$) & $3.8\times 10^{2}$ ($7.8\times 10^{-3}$) \\ \hline\n150 & 0.12 (0.12) & $1.9\times 10^{-2}$ ($3.5\times 10^{-3}$) & 65 ($2.0\times 10^{-3}$)\\ \hline\n170 & $2.3 \times 10^{-2}$ ($2.3 \times 10^{-2}$) & $7.8\times 10^{-2}$ ($4.1\times 10^{-4}$) & 1.9 ($1.6\times 10^{-4}$) \\ \hline\n200 & 0.36 (0.36) & $7.3\times 10^{-2}$ ($2.4\times 10^{-4}$) & 3.3 ($\alt 10^{-4}$) \\ \hline\n300 & 0.44 (0.44) & $1.1\times 10^{-3}$ ($\alt 10^{-4}$) & $0.91$ ($\alt 10^{-4}$) \\ \hline\n400 & 0.47 (0.47) & $\alt 10^{-4}$ ($\alt 10^{-4}$) & $0.68$ ($\alt 10^{-4}$) \\ \hline\n\end{tabular}\n\caption{\label{table1}\em Ratios of branching fractions for an electroweak singlet scalar when $Br(ZZ\/WW)$ is tuned to the SM value. The values in parentheses are the corresponding SM predictions.}\n\end{table}\n\nIn Table I we list the ratios of branching fractions for an electroweak singlet, when $Br_s(ZZ\/WW)$ of the scalar is ``tuned'' to fake that\nof a SM Higgs doublet. We see in all cases $Br_s(Z\gamma\/WW)$ and $Br_s(\gamma\gamma\/WW)$ are enhanced over the\nSM ratios, especially in the low mass region, where the difference can reach five orders of magnitude at $m_S=130$ GeV for $Br_s(\gamma\gamma\/WW)$. The reason behind the enhancement is quite easy to understand: the singlet coupling strengths to all four vector boson pairs are all of the same order.
Thus decays into massive final states such as $ZZ$ and $WW$ are suppressed due to phase space and kinematic factors, especially in the low scalar mass region where the $WW$ and $ZZ$ channels are off-shell.\n By contrast, in the SM the Higgs couplings to $WW$ and $ZZ$ arise at the tree-level while the couplings to $Z\gamma$ and $\gamma\gamma$ come from dimension-five operators at the one-loop level. So decays into massive final states could still dominate even below the kinematic threshold.\n\nAnother interesting case is exhibited in Table II, where $Br_s(\gamma\gamma\/WW)$ is dialed to fake that of the SM Higgs. In this case the $ZZ$ channel is suppressed relative to the $WW$ channel, while the $Z\gamma$ channel is significantly enhanced. The importance of $Z\gamma$ decays is notable, since this channel has so far been neglected in the physics planning of the LHC experiments.\n\n\begin{table}[t]\n\begin{tabular}{|c|c|c|c|} \hline\n\makebox[3cm]{$m_S$ (GeV)} & \makebox[4cm]{$Br(\gamma\gamma\/WW)$} & \makebox[4cm]{$Br(ZZ\/WW)$} & \makebox[4cm]{$Br(Z\gamma\/WW)$}\n\\ \hline \hline\n115 & $2.7\times 10^{-2}$ ($2.7\times 10^{-2}$) & $5.1\times10^{-2}$ (0.11) & 39 ($9.0\times 10^{-3}$) \\ \hline\n120 & $1.7\times 10^{-2}$ ($1.7\times 10^{-2}$) & $5.7\times 10^{-2}$ (0.11) & 35 ($8.2\times 10^{-3}$)\\ \hline\n130 & $7.8\times 10^{-3}$ ($7.8\times 10^{-3}$) & $6.7\times 10^{-2}$ (0.13) & 26 ($6.7\times 10^{-3}$) \\ \hline\n140 & $4.0\times 10^{-3}$ ($4.0\times 10^{-3}$) & $7.1\times 10^{-2}$ (0.14) & 18 ($5.1\times 10^{-3} $) \\ \hline\n150 & $2.0\times 10^{-3}$ ($2.0\times 10^{-3}$) & $6.4\times10^{-2}$ (0.12) & 10 ($3.5\times 10^{-3}$) \\ \hline\n170 & $1.6\times 10^{-4}$ ($1.6\times 10^{-4}$)& $1.4 \times10^{-2}$ ($2.3 \times 10^{-2}$) & $0.81$ ($4.1\times 10^{-4}$) \\ \hline\n\end{tabular}\n\caption{\label{table2}\em Ratios of branching fractions for an electroweak singlet scalar when $Br(\gamma\gamma\/WW)$
is tuned to the SM value. The values in parentheses are the corresponding SM predictions.}\n\end{table}\n\nIf one makes the assumption that the individual partial decay width of a scalar decaying into $V_1V_2$ could be obtained, presumably at a future lepton collider or with a very high integrated luminosity at the LHC, then we could explore the possibility of determining the $(\mathbf{N}_L,\mathbf{N}_R)$ multiplet structure under $SU(2)_L \times SU(2)_R$. The specific question one could ask, given that the $SU(2)_C$ singlet from all $(\mathbf{N}_L,\mathbf{N}_R)$ multiplets has the same ratio of couplings to $WW$ and $ZZ$, is whether it is possible to distinguish the $SU(2)_C$ singlet contained in a $(\mathbf{2}_L,\mathbf{2}_R)$ from that contained in a $(\mathbf{3}_L,\mathbf{3}_R)$. To this end we observe that the couplings, $g_{h_{1}^0WW}$ and $g_{h_{1}^0ZZ} $\nin Eqs.~(\ref{eq:h10ww}) and (\ref{eq:h10zz}), and the gauge boson masses in Eqs.~(\ref{eq:mwcus}) and (\ref{eq:mzcus}) are given by two parameters: $N$ and the scalar VEV $v$. Solving for $v$ in terms of the masses and $N$ we obtain\n\begin{equation}\ng_{h_1^0WW}=g_{h_1^0ZZ}\, c_w^2= \sqrt{\frac{N^2-1}3} \, g m_W \ .\n\end{equation}\nTherefore the coupling becomes stronger as $N$ increases. The Higgs doublet has $N=2$, while the coupling of the $h_1^0$ in the $(\mathbf{N}_L,\mathbf{N}_R)$ is $\sqrt{(N^2-1)\/3}$ times larger than that in the Higgs doublet, resulting in a partial decay width that is $(N^2-1)\/3$ enhanced.
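The identity above and the resulting $N$-dependence can be checked numerically; in the sketch below the values of $g$ and $v$ are arbitrary, since they drop out of the comparison between the two expressions for $g_{h_1^0WW}$.

```python
import math

g_weak = 0.65   # illustrative SU(2)_L gauge coupling value (assumption)
v = 1.0         # multiplet VEV in arbitrary units; it cancels in the identity

for N in range(2, 8):
    mW2 = g_weak**2 * v**2 * N * (N**2 - 1) / 24          # Eq. (mwcus)
    g_h1WW = 2 * math.sqrt(2 / N) * mW2 / v               # Eq. (h10ww)
    # Identity quoted in the text: g_{h_1^0 WW} = sqrt((N^2 - 1)/3) * g * m_W
    assert math.isclose(g_h1WW,
                        math.sqrt((N**2 - 1) / 3) * g_weak * math.sqrt(mW2))
    # Partial-width enhancement over the Higgs doublet (N = 2) at fixed m_W
    enhancement = (N**2 - 1) / 3
    print(N, enhancement)
```
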
Once $N$ is known, the complete $SU(2)_L\times U(1)_Y$ quantum number of the scalar resonance is determined.\n\n\n\n\nAs an example, at the LHC one could consider the production of the scalar in the vector boson fusion channels\n$WW\/ZZ \to S \to WW$ and $WW\/ZZ \to S \to ZZ$, which provide estimates of\n\begin{eqnarray}\n\label{eq:vbfguys}\n(\Gamma_{WW} + \Gamma_{ZZ})\frac{\Gamma_{WW}}{\Gamma_t}\n\;\;{\rm and}\; \;\n(\Gamma_{WW} + \Gamma_{ZZ})\frac{\Gamma_{ZZ}}{\Gamma_t}\n\; .\n\end{eqnarray}\nThe total width $\Gamma_t$ could be extracted by measuring the Breit-Wigner shape of the invariant mass spectrum in the $ZZ$ channel. Then one could simply fit the partial widths $\Gamma_{WW}$ and $\Gamma_{ZZ}$ using the different hypotheses for $N$. Since the event rate in this case is proportional to $\Gamma_{WW\/ZZ}^2$, if the total width remains the same the enhancement of an $N\ge 3$ multiplet over the Higgs doublet is $(N^2-1)^2\/9\ge 64\/9\approx 7$, which is a significant enhancement.\n\n\n\n\section{Discussion and outlook}\n\label{sect:section5}\n\nWe have performed a general analysis up to dimension five of the couplings\nbetween electroweak vector boson pairs $V_1V_2$ and a Higgs look-alike $S$, assumed to\nbe a neutral $CP$-even scalar resonance. We used the framework of unbroken\ncustodial symmetry to group the possibilities into three ``pure cases'':\nscalars whose electroweak properties match a SM Higgs, scalars that are\n$SU(2)_L\times SU(2)_R$ singlets and thus couple to $V_1V_2$ only at dimension\nfive, and scalars that couple to $V_1V_2$ as a 5-plet under custodial $SU(2)_C$.\n\nFig.~\ref{fig:fig1} shows that it should be straightforward to experimentally\ndistinguish the 5-plet case from the SM-like case of a custodial singlet, using\njust the ratio of the $ZZ$ and $WW$ decay rates.
\nFig.~\ref{fig:fig2}\nillustrates that $SU(2)_L\times SU(2)_R$ singlets produce distinctive\nrelations between the various ratios of $V_1V_2$ decay rates, emphasizing the\nimportance of detecting all four decay channels: $WW$, $ZZ$, $\gamma\gamma$,\nand $Z\gamma$. \n\nTo implement our proposal one can either try to extract ratios of partial decay widths directly \cite{Zeppenfeld:2002ng}, or measure the individual partial decay widths into pairs of electroweak vector bosons first \cite{Duhrssen:2004cv, Lafaye:2009vr} and then take the ratios. In the first possibility the event rate measured in each decay channel of a scalar resonance $S$ is given by \n\begin{equation}\nB\sigma(V_1V_2)= \sigma(S)\times Br(S\to V_1V_2) \ .\n\end{equation}\nTherefore one could approximate the ratio of partial decay widths by the ratio of event rates in each channel, which are measured directly in collider experiments. It would be interesting to study ways to improve on the uncertainty arising from either possibility.\n\n\nSince experimental analyses are often driven by the final states observed, our study demonstrates the importance of having a correlated understanding of all decay channels into pairs of electroweak vector bosons to avoid misidentification. Tables I and II show how one can be badly fooled by measuring only two of the\nelectroweak $V_1V_2$ decay channels for a candidate Higgs. The tables were generated from\nthe predicted properties of a neutral $CP$-even spin-0 ``Higgs'' that\nis in fact an $SU(2)_L \times SU(2)_R$ singlet imposter.
In Table I the coefficients\n$\\kappa_1$, $\\kappa_2$ of the dimension-five operators in Eq.~(\\ref{eq:singletsu2})\nhave been adjusted so that the ratio of branching fractions of $S\\to ZZ$ over\n$S\\to WW$ coincides with the SM value for the given masses $m_S$.\nIn Table II the same coefficients\nhave been adjusted so that the branching ratio of $S\\to \\gamma\\gamma$ over\n$S\\to WW$ coincides with the SM value.\nIn both cases measurement of the two remaining $V_1V_2$ decay rates\nunmasks the Higgs imposter in dramatic fashion.\n\n\n\nIn a real experiment, the analysis suggested here could be folded into\nhypothesis testing based on likelihood ratios designed to expose the spin and $CP$ properties of new\nheavy resonances \\cite{DeRujula:2010ys,Gao:2010qx}.\nHigher-order effects could be included, as well as the uncertainties\nassociated with unfolding the experimental data to extract the $S\\to V_1V_2$\nproduction and decay properties.\n\n\n\\begin{acknowledgements}\nWe are grateful to Marcela Carena, Riccardo Rattazzi, and Maria Spiropulu for interesting discussions, and\nto Alvaro De R\\'ujula for coining the phrase ``Higgs imposters''.\nI.~L. was supported in part by the U.S. Department of Energy under\ncontracts No. DE-AC02-06CH11357 and No. DE-FG02-91ER40684.\nFermilab is operated by the Fermi\nResearch Alliance LLC under contract DE-AC02-07CH11359 with the\nU.S. Department of Energy.\n\\end{acknowledgements}\n\n\n\\section*{Appendix}\n\nWe consider a massive spin-0 particle $S$ decaying to two off-shell vector bosons $V_1^*$, $V_2^*$.
In the rest\nframe of $S$, and choosing the positive $z$-axis along the direction of $V_2$, the 4-momenta can be written:\n\\begin{equation}\np_S = (m_S,0,0,0) \\ , \\quad p_1 = m_1(\\gamma_1, 0, 0, -\\beta_1 \\gamma_1) \\ , \\quad\np_2 = m_2(\\gamma_2, 0, 0, \\beta_2 \\gamma_2) \\ ,\n\\end{equation}\nwhere $m_1$, $m_2$ are the off-shell vector boson masses, and the boost\nfactors $\\gamma_1$, $\\gamma_2$, $\\beta_1$, $\\beta_2$ are defined by\n\\begin{eqnarray}\n\\label{eqn:cha}\n\\gamma_1 &=& \\frac{m_S}{2m_1}\\left( 1 + \\frac{m_1^2 - m_2^2}{m_S^2} \\right) \\; , \\quad\n\\gamma_2 = \\frac{m_S}{2m_2}\\left( 1 - \\frac{m_1^2 - m_2^2}{m_S^2} \\right) \\; ,\\\\\n\\label{eqn:sha}\n\\beta_1\\gamma_1&=& \\frac{m_S}{2m_1}\\sqrt{\\left( 1 - \\frac{(m_1+m_2)^2}{m_S^2} \\right)\\left( 1 - \\frac{(m_1-m_2)^2}{m_S^2} \\right)} \\; ,\\\\\n\\label{eqn:shb}\n\\beta_2\\gamma_2 &=& \\frac{m_S}{2m_2}\\sqrt{\\left( 1 - \\frac{(m_1+m_2)^2}{m_S^2} \\right)\\left( 1 - \\frac{(m_1-m_2)^2}{m_S^2} \\right)} \\; .\n\\end{eqnarray}\nWe will use the following convenient notation:\n\\begin{equation}\n\\gamma_a = \\gamma_1\\gamma_2(1 + \\beta_1\\beta_2) = \\ch(y_2 - y_1)\\; , \\qquad\n\\gamma_b = \\gamma_1\\gamma_2(\\beta_1 + \\beta_2) = \\sh(y_2 - y_1) \\; ,\n\\end{equation}\nwhere $y_1$ and $y_2$ are the vector boson rapidities, as well as the following useful identities:\n\\begin{equation}\n \\gamma_a^2 - \\gamma_b^2 = 1 \\; , \\qquad \n \\gamma_a = \\frac{1}{2m_1m_2}\\left[ m_S^2 - (m_1^2+m_2^2) \\right] \\; , \\qquad\n\\label{eqn:bgamident}\n \\gamma_b = \\frac{m_S}{m_1}\\beta_2\\gamma_2 \\; .\n\\end{equation}\n\nIt is very convenient to compute the decay widths using helicity amplitudes.\nFor this purpose we need to choose a consistent basis for the polarization vectors\nof the vector bosons:\n\\begin{eqnarray}\n\\epsilon_2(\\lambda_2=\\pm) &=& \\pm\\frac{1}{\\sqrt{2}}(0, 1, \\pm i, 0) \\ , \\quad \\epsilon_2(\\lambda_2=0) = (\\beta_2\\gamma_2, 0, 0, \\gamma_2)
\\\\\n\\epsilon_1(\\lambda_1=\\mp) &=& \\pm \\frac{1}{\\sqrt{2}}(0, 1,\\pm i, 0) \\ , \\quad\n\\epsilon_1(\\lambda_1=0) = (\\beta_1\\gamma_1, 0, 0, -\\gamma_1) \n\\end{eqnarray}\nwhere $\\lambda_1$, $\\lambda_2$ label the transverse and longitudinal polarizations.\n\nLast but not least we will also need an expression for the two-body phase space:\n\\begin{eqnarray}\nd\\Phi_2(p_S; p_1,p_2) &=& \\frac{d^3p_1 d^3p_2}{(2\\pi)^3 2E_1 (2\\pi)^3 2E_2} (2\\pi)^4 \\delta^4(p_S-p_1-p_2) \\\\\n&=& \\frac{1}{16\\pi^2}\\frac{\\vert \\vec{p}_1 \\vert}{m_S} \\; d{\\rm cos}\\,\\theta \\; d\\phi\n\\end{eqnarray}\nwhere $\\theta$, $\\phi$ are the polar and azimuthal angles between the direction of $V_2$ and\nsome other reference direction, e.g. the direction of the boost from the lab frame to the $S$ rest\nframe, or the direction of the beam. Note that\n\\begin{equation}\n\\vert \\vec{p}_1 \\vert = \\vert \\vec{p}_2\\vert = m_1\\beta_1\\gamma_1 = m_2\\beta_2\\gamma_2\n=\\frac{m_1m_2}{m_S}\\gamma_b \\; .\n\\end{equation}\n\nIt is important to remember that when $V_1$, $V_2$ are distinguishable particles,\nwe integrate $\\theta$, $\\phi$ over the full $4\\pi$ solid angle. However when $V_1$,\n$V_2$ are identical particles (e.g. two $Z$'s or two $\\gamma$'s) we should only\nintegrate $\\theta$ from zero to $\\pi\/2$, to avoid counting the same final state\nconfiguration twice. Thus the angular integration gives $2\\pi$ in this case, not\n$4\\pi$. \n\nThe differential off-shell decay width can be written:\n\\begin{equation}\n\\frac{d^2\\Gamma (S\\to V_1^* V_2^*)}{dm_1^2 dm_2^2}\n= \\frac{2\\pi \\delta_V}{2 m_S} \\frac{m_1m_2\\gamma_b}{16\\pi^2m_S^2} \n\\, P_1P_2 \\hspace*{-10pt}\n\\sum_{\\lambda_1,\\lambda_2 = \\pm,0} \n\\left\\vert \\Gamma^{\\mu\\nu}_{SV_1V_2} \\epsilon^*_{\\mu}(\\lambda_1)\\epsilon^*_{\\nu}(\\lambda_2)\n\\right\\vert^2\n\\end{equation}\nwhere $\\delta_V =1$ for identical vector bosons and 2 otherwise. 
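The kinematic identities quoted above ($\gamma_a^2-\gamma_b^2=1$, the closed form for $\gamma_a$, and $|\vec{p}_1|=m_1m_2\gamma_b/m_S$) are easily checked numerically. The following Python sketch (illustrative only; the masses are arbitrary sample values) implements Eqs.~(\ref{eqn:cha})--(\ref{eqn:shb}) and verifies the identities:

```python
import math

def kinematics(mS, m1, m2):
    """Return (gamma_a, gamma_b, |p_1|) for S -> V1(m1) V2(m2) in the S rest frame."""
    # boost factors of Eqs. (cha) and (sha)/(shb)
    g1 = mS / (2 * m1) * (1 + (m1**2 - m2**2) / mS**2)
    g2 = mS / (2 * m2) * (1 - (m1**2 - m2**2) / mS**2)
    root = math.sqrt((1 - (m1 + m2)**2 / mS**2) * (1 - (m1 - m2)**2 / mS**2))
    b1 = (mS / (2 * m1)) * root / g1          # beta_1 = (beta_1 gamma_1) / gamma_1
    b2 = (mS / (2 * m2)) * root / g2
    gamma_a = g1 * g2 * (1 + b1 * b2)
    gamma_b = g1 * g2 * (b1 + b2)
    p = m1 * b1 * g1                          # |p_1| = m1 beta_1 gamma_1
    return gamma_a, gamma_b, p

mS, m1, m2 = 300.0, 80.4, 91.2                # sample masses in GeV
ga, gb, p = kinematics(mS, m1, m2)
assert abs(ga**2 - gb**2 - 1) < 1e-9                              # gamma_a^2 - gamma_b^2 = 1
assert abs(ga - (mS**2 - m1**2 - m2**2) / (2 * m1 * m2)) < 1e-9   # closed form for gamma_a
assert abs(p - m1 * m2 * gb / mS) < 1e-9                          # |p_1| = m1 m2 gamma_b / mS
print(ga, gb, p)
```

Each assertion is exact algebraically, so it holds to floating-point precision for any kinematically allowed choice $m_1+m_2<m_S$.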
Here $\\Gamma^{\\mu\\nu}_{SV_1V_2} $\nis the $SV_1V_2$ coupling tensor that can be read off from the Lagrangian. The propagator\nfactors\n\\begin{equation}\nP_i = \\frac{M_{V_i}\\Gamma_{V_i}}{\\pi}\\frac{1}\n{(m_i^2-M_{V_i}^2)^2+M_{V_i}^2\\Gamma_{V_i}^2}\n\\end{equation}\nbecome just $\\delta(m_i^2 - M_{V_i}^2)$ in the narrow width approximation. We will write the coupling tensor as\n\\begin{equation}\n\\Gamma^{\\mu\\nu}_{SV_1V_2} = \\left( \\tilde{g}_{hV_1V_2} + \\frac{\\tilde{g}_{sV_1V_2}}{m_S} p_1\\cdot p_2 \\right) \\; g^{\\mu\\nu} - \\frac{\\tilde{g}_{sV_1V_2}}{m_S}\\, p_1^\\nu p_2^\\mu \\ ,\n\\end{equation}\nwhere the coupling constants $\\tilde{g}_{hV_1V_2}$ and $\\tilde{g}_{sV_1V_2}$ are defined as the coefficients of the following operators\n\\begin{equation}\n \\frac{\\delta_V}2 \\left( \\tilde{g}_{hV_1V_2}\\,S\\, V_1^\\mu V_{2\\, \\mu} + \\frac{\\tilde{g}_{sV_1V_2}}{2m_S} S\\, V_1^{\\mu\\nu} V_{2\\, \\mu\\nu} \\right) \\ .\n \\end{equation}\nIn the standard model $\\tilde{g}_{hV_1V_2}^2=8 m_1^2 m_2^2 G_F\/\\sqrt{2}$ for the $WW$ and $ZZ$ channels and all other couplings vanish at tree level, while for an electroweak singlet scalar $\\tilde{g}_{hV_1V_2}=0$. By angular momentum conservation the only nonvanishing contributions\nfrom the helicity sums are for $(\\lambda_1,\\lambda_2) = (\\pm,\\pm)$ or $(0,0)$:\n\\begin{eqnarray}\n\\sum_{(\\lambda_1,\\lambda_2)} \\left| \\Gamma^{\\mu\\nu} \\epsilon_\\mu^*(\\lambda_1) \\epsilon_\\nu^*(\\lambda_2)\\right|^2 &=& \n\\left|\\tilde{g}_{hV_1V_2}\\right|^2 (2+ \\gamma_a^2) +\\frac{m_1^2 m_2^2}{m_S^2} |\\tilde{g}_{sV_1V_2}|^2 (2\\gamma_a^2+1)\\nonumber \\\\\n&& +\\frac{6m_1m_2\\gamma_a}{m_S}\\, \\Re(\\tilde{g}_{hV_1V_2}\\, \\tilde{g}_{sV_1V_2}^*) ,\n\\end{eqnarray}\nwhere $\\Re(c)$ is the real part of the complex number $c$.
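The closed-form helicity sum above can be cross-checked by brute force: build the polarization vectors and the coupling tensor explicitly, sum $|\mathcal{M}|^2$ over $(+,+)$, $(-,-)$, $(0,0)$, and compare. The Python sketch below does this for real couplings $\tilde{g}_h$, $\tilde{g}_s$ (illustrative values; not part of the derivation):

```python
import math

def mdot(a, b):
    """Minkowski product a.b with metric (+,-,-,-); components may be complex."""
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

def helicity_sum(mS, m1, m2, gh, gs):
    """Brute-force sum of |Gamma^{mu nu} eps1* eps2*|^2 over helicities."""
    g1 = mS / (2 * m1) * (1 + (m1**2 - m2**2) / mS**2)
    g2 = mS / (2 * m2) * (1 - (m1**2 - m2**2) / mS**2)
    b1 = math.sqrt(1 - 1 / g1**2)
    b2 = math.sqrt(1 - 1 / g2**2)
    p1 = [m1 * g1, 0, 0, -m1 * b1 * g1]
    p2 = [m2 * g2, 0, 0, m2 * b2 * g2]
    s = 1 / math.sqrt(2)
    # complex-conjugated polarization vectors in the basis of the appendix
    eps1 = {+1: [0, -s, -1j * s, 0], -1: [0, s, -1j * s, 0],
             0: [b1 * g1, 0, 0, -g1]}
    eps2 = {+1: [0, s, -1j * s, 0], -1: [0, -s, -1j * s, 0],
             0: [b2 * g2, 0, 0, g2]}
    total = 0.0
    for lam in (+1, -1, 0):    # only (+,+), (-,-), (0,0) survive
        e1, e2 = eps1[lam], eps2[lam]
        amp = ((gh + gs / mS * mdot(p1, p2)) * mdot(e1, e2)
               - gs / mS * mdot(p2, e1) * mdot(p1, e2))
        total += abs(amp) ** 2
    return total

def closed_form(mS, m1, m2, gh, gs):
    """Closed-form helicity sum quoted in the text (real couplings)."""
    ga = (mS**2 - m1**2 - m2**2) / (2 * m1 * m2)
    return (gh**2 * (2 + ga**2)
            + (m1 * m2 / mS)**2 * gs**2 * (2 * ga**2 + 1)
            + 6 * m1 * m2 * ga / mS * gh * gs)

print(helicity_sum(300.0, 80.0, 91.0, 1.0, 0.5))
print(closed_form(300.0, 80.0, 91.0, 1.0, 0.5))
```

The two numbers agree to floating-point precision, confirming that the $p_1^\nu p_2^\mu$ term contributes only to the longitudinal amplitude, as stated.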
Then the off-shell decay width is\n\\begin{eqnarray}\n\\frac{d\\Gamma(S\\to V_1^*V_2^*)}{dm_1^2dm_2^2}& =& \\frac{2\\pi \\delta_V}{2m_S} \\frac{m_1m_2\\gamma_b}{16\\pi^2m_S^2}\\left[ \\left|\\tilde{g}_{hV_1V_2}\\right|^2(2+ \\gamma_a^2) +\n |\\tilde{g}_{sV_1V_2}|^2\\,\\frac{m_1^2 m_2^2}{m_S^2} (2\\gamma_a^2+1) \\right. \\nonumber \\\\\n && \\qquad \\left.+ \\Re(\\tilde{g}_{hV_1V_2}\\, \\tilde{g}_{sV_1V_2}^*)\\, \\frac{6m_1m_2\\gamma_a}{m_S} \\right] P_1 P_2\\ .\n \\end{eqnarray}\n The total decay width of $S\\to V_1^*V_2^*$ is given by\n \\begin{equation}\n \\label{eq:offtotalwidth}\n \\Gamma(S\\to V_1^*V_2^*) = \\int_0^{m_S^2} dm_1^2 \\int_0^{\\left(m_S-m_1\\right)^2} dm_2^2\\, \\frac{d\\Gamma(S\\to V_1^*V_2^*)}{dm_1^2dm_2^2} \\ .\n \\end{equation}\nThe above formula is valid even when the scalar mass crosses the mass thresholds of the $W$ and $Z$ bosons. More explicitly, when both vector bosons are on-shell, $m_1 \\to m_V$, $m_2 \\to m_V$, we have (with $x \\equiv m_V^2\/m_S^2$)\n \\begin{eqnarray}\n \\label{eq:onshellfinal}\n\\Gamma(S\\to V_1V_2) &=& \\frac{\\delta_V}{32\\pi m_S}\\sqrt{1-4x} \\left\\{\\left|\\tilde{g}_{hV_1V_2}\\right|^2 \\frac1{4x^2}(1-4x+12x^2) \\right. 
\nonumber \\\\\n&& \\qquad \\left.+ |\\tilde{g}_{sV_1V_2}|^2 \\frac{m_S^2}2(1-4x+6x^2) + \\Re(\\tilde{g}_{hV_1V_2}\\, \\tilde{g}_{sV_1V_2}^*)\\, 3m_S(1-2x) \\right \\}.\n\\end{eqnarray}\nFor a standard model Higgs boson, $h$, we recover the well-known expression \\cite{Lee:1977eg}\n\\begin{equation}\n\\Gamma (h\\to V_1 V_2) = \\delta_V \\frac{G_F}{\\sqrt{2}}\n\\frac{m_h^3}{16\\pi} \\sqrt{1-4x} (1-4x+12x^2) \\ .\n\\end{equation}\n\nIn the case of $S \\to Z^*\\gamma$, we have to take into account that only the\ntransverse polarizations contribute, and take the limit $m_2 \\to 0$:\n\\begin{equation}\n\\frac{d\\Gamma (S\\to Z^* \\gamma)}{dm_1^2}\n= \\frac{1}{32\\pi} \\, \\vert \\tilde{g}_{sZ\\gamma} \\vert^2\\,\nm_S\\, \\left(1-\\frac{m_1^2}{m_S^2}\\right)^3 \\,P_1 \\ .\n\\label{eqn:offshellZgamma}\n\\end{equation}\nWhen the $Z$ is on-shell this becomes\n\\begin{equation}\n\\Gamma (S\\to Z \\gamma)\n= \\frac{1}{32\\pi} \\, \\vert \\tilde{g}_{sZ\\gamma} \\vert^2\\,\nm_S\\, (1-x)^3 \\ .\n\\label{eqn:onshellZgamma}\n\\end{equation}\nThe width for $S\\to \\gamma\\gamma$ follows from this\n(note we divide by 2 to get the correct phase space):\n\\begin{equation}\n\\label{eq:sgaga}\n\\Gamma (S\\to \\gamma\\gamma)\n= \\frac{1}{64\\pi} \\, \\vert \\tilde{g}_{s\\gamma\\gamma} \\vert^2\\,\nm_S \\ .\n\\end{equation}\n
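As a final numerical sanity check (an illustrative sketch, not part of the derivation), one can verify that the general on-shell width Eq.~(\ref{eq:onshellfinal}) reduces to the standard model expression when $\tilde{g}_s=0$ and $\tilde{g}_h^2 = 8 m_V^4 G_F/\sqrt{2}$; the sample masses below are arbitrary:

```python
import math

GF = 1.1663787e-5  # Fermi constant in GeV^-2

def width_general(mS, mV, gh, gs, delta_V):
    """On-shell width of Eq. (onshellfinal), real couplings, x = mV^2/mS^2."""
    x = mV**2 / mS**2
    return (delta_V / (32 * math.pi * mS) * math.sqrt(1 - 4 * x)
            * (gh**2 / (4 * x**2) * (1 - 4 * x + 12 * x**2)
               + gs**2 * mS**2 / 2 * (1 - 4 * x + 6 * x**2)
               + 3 * gh * gs * mS * (1 - 2 * x)))

def width_sm(mh, mV, delta_V):
    """SM expression Gamma(h -> V1 V2) quoted in the text."""
    x = mV**2 / mh**2
    return (delta_V * GF / math.sqrt(2) * mh**3 / (16 * math.pi)
            * math.sqrt(1 - 4 * x) * (1 - 4 * x + 12 * x**2))

mS, mW = 300.0, 80.4                              # sample masses in GeV
gh = math.sqrt(8 * mW**4 * GF / math.sqrt(2))     # SM value of g_h for WW
print(width_general(mS, mW, gh, 0.0, 2))          # S -> WW, delta_V = 2
print(width_sm(mS, mW, 2))                        # agrees with the line above
```

The agreement is exact up to floating-point rounding, since $\tilde{g}_h^2/(4x^2) = \sqrt{2}\,G_F m_S^4$ for the SM coupling.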